How Influencer-Generated Content Has Become Core to Brand Strategies
Adrian Pennington
TL;DR
In the ever-evolving landscape of digital marketing, the creator economy has emerged as a powerful force, reshaping the way brands connect with consumers.
An overwhelming majority of brands are using creator-generated content for channels beyond social media, highlighting its versatility and reach, a new survey finds.
The terms “creator” and “influencer” tend to be used interchangeably, but marketers are applying different metrics to judge the performance of each.
Influencer-generated content is now core to brand strategies, with marketers increasingly savvy about the differences between creators and influencers and how to measure their performance.
A recent study conducted by creator marketing platform LTK underlines the profound impact of creator marketing, an industry now estimated at $21 billion globally.
Marketers worldwide are expected to spend more than $32 billion on influencer marketing next year. Influencer spend is now outpacing traditional ad investment, with 80% of brands saying they increased creator budgets in 2023, per the report.
Some 92% of brands plan to increase their spending on creators in 2024, and 36% plan to spend at least half of their entire digital marketing budget on creators.
Because of what LTK calls the “significant trust” creators have built with their communities, the majority of brands it surveyed said consumers turn to creators more than to social media ads or celebrities.
An overwhelming majority of brands (98%) are using creator content for channels beyond just social media, highlighting its versatility and reach.
Indeed, when asked where their marketing dollars are shifting, creator marketing and connected TV shared the top position overall for investment growth, beating out channels like paid search and paid social.
The study also found that dollars are being moved from digital ads to creator marketing because creator marketing at scale has proven more efficient in side-by-side, all-cost comparisons.
Marketers, however, are becoming more discerning about the difference between influencers and creators.
“As marketers have got more comfortable with the creator economy, influencers have become the go-to for performance marketing, while creators are considered more for branding purposes,” says Krystal Scanlon, writing at Digiday.
Marketers are feeling pressure to be transparent and efficient about their purchases and the reasons behind them, which means getting specific about when it’s better to collaborate with an influencer versus a creator.
Lindsey Bott, senior content manager at Ruckus Marketing, told Scanlon, “Previously, influencer involvement might have organically emerged in ongoing discussions. Now, we’re seeing brands come to us more frequently with well-defined briefs or specific suggestions right from the outset.”
The days of pay-for-reach deals are long gone, it seems. In fact, influencers increasingly have specific metrics, such as engagement rate, CPM, CPE, clicks, click-through rate and conversions, tied to them.
For example, Bott’s team has observed clients gravitating toward influencers due to their established reach and engagement metrics, emphasizing performance-driven results.
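For readers who want those acronyms unpacked, the performance metrics above reduce to simple ratios. Here is a minimal Python sketch with hypothetical numbers (purely illustrative; these are not figures from the LTK report or from Ruckus clients):

```python
def cpm(spend: float, impressions: int) -> float:
    """Cost per mille: dollars paid per thousand impressions."""
    return spend / impressions * 1000

def cpe(spend: float, engagements: int) -> float:
    """Cost per engagement (likes, comments, shares, saves)."""
    return spend / engagements

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate, as a percentage of impressions."""
    return clicks / impressions * 100

def engagement_rate(engagements: int, followers: int) -> float:
    """Engagements relative to audience size, as a percentage."""
    return engagements / followers * 100

# Hypothetical sponsored post: $5,000 spend, 400k impressions,
# 12k engagements, 3,200 clicks, 250k followers
print(cpm(5_000, 400_000))                # 12.5  -> $12.50 CPM
print(round(cpe(5_000, 12_000), 2))       # 0.42  -> $0.42 CPE
print(ctr(3_200, 400_000))                # 0.8   -> 0.8% CTR
print(engagement_rate(12_000, 250_000))   # 4.8   -> 4.8% engagement
```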
Conversely, there’s a growing interest in creators who prioritize crafting genuine, narrative-based content that closely aligns with a brand’s values and campaign themes.
“They’re unbelievable storytellers who can really shape perception,” Keith Bendes, VP of strategy at Linqia, tells Digiday.
Unlike influencers, creators usually don’t have the same set of metrics tied to them.
“Over time, as marketers understand how a specific creator’s content performs when repurposed on their social channels or paid media, they may start to set specific benchmarks for that creator’s assets,” said Lindsey Gamble, associate director at influencer marketing platform Mavrck.
According to Scanlon, this shift underscores how brands are distinguishing between utilizing audience influence and cultivating content that profoundly connects with their intended audience.
“Creators have evolved into valuable assets for brands, capable of driving substantial business impact,” writes Rodney Mason, VP and head of marketing at LTK, at Adweek. “As we move into 2024, creator marketing is fundamentally shifting how brands engage with consumers. Those marketers who embrace the rise of creators will find themselves at the forefront of this transformative wave. The time to invest in creators and their unique ability to influence, engage and build trust with consumers is now.”
In a recent webinar, “The Next Wave of Creator Marketing: 2024 Forecast,” LTK’s director of strategy and insights for brand partnerships, Ally Anderson, shares more detail about how “creator guided shopping” is becoming the foundation of marketing efforts, influencing consumers throughout their discovery journey. Watch the full presentation in the video below:
“The Africas:” How Do You Capture “the Soul” of a Place?
TL;DR
Australian production company Grafton Create traveled to Egypt, Ethiopia, and Namibia to shoot “The Africas” using the Sony FX6.
Director, DP, and editor Elliot Grafton says the FX6 was “the obvious choice…based on its low-light capabilities” because they wanted to shoot “a lot of dark churches and low light situations.”
The planning and research process took two months of work, and they still had to drop two countries from their original itinerary due to safety concerns and visa issues.
“The Africas,” the latest travel film by production company Grafton Create, showcases the lesser-known sides of Egypt, Ethiopia and Namibia.
“We really wanted to dig deeper and find a story that had a little bit more meaning for us,” explains Georgie Woskett, cofounder of Grafton Create and first AD on the project. The duo also sought “to represent a number of different cultures and religions,” while filming “something really spectacular visually.”
“We want to capture the soul of the place,” Woskett says. “And that is what ultimately gives us meaning.”
Woskett says, “We knew that, given the amount of logistics that were involved in this, we had to be very intentional about what we were filming. And so we had a shot list for each of the countries, which then helped us create an itinerary.”
Nonetheless, these locations were a risk in and of themselves, Woskett admits.
The research process took about two months because “there’s not a lot of information around these particular places that we went to online,” she explains. After all of the work, Woskett says, the shot list and background research became “this Bible to keep referring to.”
“Approaching it the way that we did, which was extremely sort of well planned, whilst allowing ourselves and giving ourselves the permission to be flexible in the moment, was quite different for us,” Woskett says. “Typically, we’re sort of run-and-gun, but we sort of tried to balance this out being quite, quite intentional with what we were going to be shooting and knowing exactly, sort of, what we expected from ourselves.”
Finding the Story, While Pivoting
Despite the meticulous planning, their original schedule featured Algeria and South Sudan; they had to pivot to Ethiopia due to visa problems and safety concerns, recalls director, DP, editor and Grafton Create principal Elliot Grafton.
“We push ourselves to really find those stories and those locations and people,” Grafton says, “to tell a different side of the story, and to show a different side of that country.”
For example, in Ethiopia, Grafton, Woskett and second shooter Brannon Jackson were able to visit the Tigray region in the north, which Woskett says was “definitely a favorite.”
In Tigray, they filmed Abuna Yemata Guh, which she describes as a “sixth-century rock-hewn church at the top of a limestone cliff that we had to rock climb to. It’s 2,000 meters above sea level, and so the views from there are really spectacular.”
Woskett highlights Abuna Yemata Guh’s role as refuge during the Ethiopian civil war, as well as “the spirituality of the sixth century churches and the rock art and the priests” and says “it was just such an honor to be able to see that and just so memorable.”
For his part, Grafton says visiting the Hoanib Valley Camp in Namibia was particularly memorable because they were able to film desert lions. One of the film’s goals, he says, was to “capture wildlife in a new, unique way.” In particular, he notes a shot of “the lion paw… scooping through the sand” as an unusual perspective from a rarely seen animal.
The Equipment (and the Risks)
The project, produced in partnership with Sony, was shot on the FX6, which Grafton says was “the obvious choice…based on its low-light capabilities.” The team wanted to shoot “a lot of dark churches and low light situations,” he explains.
Prior to the Sony FX6, Grafton says they had been “using the a7S III, which is a smaller camera body, which we had used for a number of years.” For their next investment, he says they “wanted to level up” and select a cinema-grade camera.
With the blessing (and financial backing) of Sony, Grafton sought to put the FX6 through its paces: “We really wanted to put it in a lot of different conditions and situations.” That meant “putting it underwater, on land and also in the air.”
One of Grafton Create’s company values is to take risks, and “The Africas” didn’t shy away from that North Star.
“It was kind of a risk … not having shot on that gear before and going into this shoot. Putting it in so many different situations that were very unfamiliar to us was quite a big risk. But yeah, we really wanted to show off the true capabilities of the camera and really push ourselves to the limit, just you know, as much as we could safely,” Grafton says.
In addition to the camera being new, they added a new drone to their kit — receiving it about a week before heading from their home base in New South Wales to Africa. They were able to get in “maybe two test flights” with the FPV cinelifter drone, Grafton says, before they packed it into its own suitcase. The FPV is “not like a normal consumer drone,” he says.
The drone was integral to some of “The Africas’” most stunning shots, but Grafton explains they “had to be really selective on when we used the drone” because they brought only “one set of batteries, so we only had five minutes of flight time” per charge and “it takes 40 to 50 minutes to charge the battery.”
Fortunately, while the FPV “can carry very heavy cameras,” Grafton says, the “FX6 is so light” they were able to maneuver “with a lot of mobility with it and a lot of speed.”
While using the drone, Grafton opted for a “Sony 14 mm [lens], just because it’s so light. And you know, it’s got a wide field of view, similar to a GoPro. So just showing off that landscape was really important to us.”
Back on the ground, Grafton says they used Sony 35mm and 50mm prime lenses “90% of the time” and also broke out the 600mm for wildlife shots. He emphasizes the lenses’ low-light capabilities, with maximum apertures of f/1.4 and f/1.2.
“Those lenses are just incredible. Just they’re so sharp and snappy on the autofocus. We knew there was going to be a lot of run-and-gun situations,” Grafton says. “A strong autofocus is really important to us.”
Shooting What You Feel
The prime lenses meant that Grafton and Jackson had to “really be in the thick of it.” For example, in the church scene in Abuna Yemata Guh, Grafton says, “We had to… go right up close next to the priest praying. … we’re really immersed in the moment because it was so close to him looking at the screen.” Again and again, Grafton says, this proximity “really shows through the work.”
The editing phase is also crucial to achieving Grafton Create’s goal of shooting what they feel, not what they see.
The final film contains “5% of what we’ve shot,” estimates Grafton. “I think we had like 20 terabytes from this trip. And you know, there’s only a six-minute video from it. So there’s a lot of stuff that people don’t see.”
In addition to cutting, they create the emotion by tapping into their own feelings.
Grafton says, “It’s in the music that we choose. We really don’t want to just pick any old song. We want to listen to the song and be, like, in tears pretty much or, you know, get goosebumps.”
If they “have tears or goosebumps by the end of the video, we know we’ve done our job,” Grafton says.
What’s next for Grafton Create? “We want it to be something that people haven’t really seen before. And it will be in very cold conditions,” Woskett shares.
Rewriting the Visual Rules for “The Hunger Games: The Ballad of Songbirds and Snakes”
TL;DR
Director Francis Lawrence explains how the appearance of a new book by franchise author Suzanne Collins compelled him to do another “Hunger Games” movie.
He says the biggest challenge was to have the audience root for young Coriolanus Snow in the knowledge that he will become the fascist dictator of Panem.
The film franchise split the final book of the original trilogy into two films, a move that angered fans, and Lawrence was adamant he didn’t want to make that mistake again.
The director referenced Gene Wilder’s Willy Wonka for the character of chief gamemaker Dr. Volumnia Gaul, played by Viola Davis.
Regular director of photography Jo Willems says that even though these are sci-fi movies, he tries to shoot them in a naturalistic way.
“Welcome to Panem,” from “The Hunger Games: The Ballad of Songbirds and Snakes”
Let the games begin — again. The Hunger Games are back, this time as a prequel telling the story of young Coriolanus Snow, who will grow up to be the tyrannical dictator ruling the sci-fi dystopia of Panem in the four previous hit films.
Also returning is director Francis Lawrence, of whom Jacob Hall at SlashFilm says, “Through his lens, what could’ve been a boilerplate YA series has leaned into the aggressive, the political, and the deeply moving.”
While Gary Ross directed the first film in the series — adapted from Suzanne Collins’ dystopian novel of the same name — Lawrence came aboard for the sequel, 2013’s Catching Fire, and stayed to helm the climactic two-part finale, 2014’s Mockingjay – Part 1 and 2015’s Mockingjay – Part 2.
“I thought I was done,” Lawrence tells A.frame‘s John Boone, “but not because I didn’t want to do more. I thought I was done because Suzanne, the author, was like, ‘I’ve been on this thing for 10 years. I’m going to write plays. I’m going to do other stuff. I’m done.’ Which I could totally understand. I’d been on the movies for three, four years, so I certainly wanted to do something else for a minute, too.”
In 2019, Lawrence and franchise producer Nina Jacobson received a call from Collins. “She said, ‘Hey, I know this is a bit of a surprise, but I’m almost done with a new book.'”
That book is The Ballad of Songbirds and Snakes, adapted by Lawrence into a script that took him two years to write.
“I think the only thing that intimidated me is that I feel like people are conditioned to believe that a Hunger Games movie is over when the games are over,” he tells Boone.
“So there’s just this feeling that people have, ‘Oh, you build up to the games. You get to the games. The games are over. Movie done.’ And the truth is, all the questions that are set up at the beginning of the movie are not answered by the end of these games, and there’s still a fair amount of story to tell.
“I found that very exciting. I liked that there was a different structure, that it wasn’t just ending with the games, that the games are just part of a much larger story — especially for Snow. But I knew that we were going to have and will always probably have a bit of a hump, just because people are conditioned to feel that.”
There’s a vogue for lengthy cinema experiences, and at 157 minutes this movie is no exception. The Hunger Games has form, though, in splitting the final book of the trilogy into two films. Lawrence was adamant he didn’t want to do that again.
“We got so much s**t for splitting Mockingjay into two movies — from fans, from critics,” he tells Boone. “And weirdly, I understand it now. It’s episodic television or something.
“You can either binge it or you wait a week and a new episode comes out, but to say, ‘You have an hour-and-a-half-long episode of TV and now you have to wait a year for the second half,’ that’s annoying, and I get it. So, that was not going to happen on my watch this time around.”
Creating Coriolanus Snow and Volumnia Gaul
He says the biggest challenge in nailing the narrative was to have the audience root for Snow (Tom Blyth), in the beginning, “empathizing with him, while making sure that the elements of the need for ambition, some of the greed, some of maybe the genetic darkness that’s in him from his father, that all those seeds are planted. So eventually, in his descent into darkness, you find it sort of truthful.”
That arc is reminiscent of Anakin Skywalker’s transition to the dark side over the course of Star Wars episodes one to three, but Hall tries to draw Lawrence into making explicit parallels with Donald Trump.
“It’s a real 2023 mood for a movie to be about how this person you really like is actually a fascist. It feels very timely right now,” he says.
Lawrence replies, tactically, “Yes, yes, for sure. But we get to see him formed into one. He doesn’t start as one.”
For Viola Davis’ character of chief gamemaker Dr. Volumnia Gaul, the director’s reference was Gene Wilder’s Willy Wonka.
“One could consider her a villain in this movie, but she does think she’s doing the right thing and what she’s doing is important,” he said to Boone. “She certainly is a very specific voice, philosophically, in the movie. But the Willy Wonka reference was more that her joy is actually in the creativity of the work that she’s doing, which informs the hair, the makeup, the wardrobe. That joy and the odd, creepy creations reminded me a little of the sinister underpinnings of Gene Wilder’s Wonka. That was my reference for her, which she totally got!”
He adds, “I have to admit, I was a little nervous bringing that up to her, but she totally got it and completely went for it.”
Creating a Naturalistic Look for a Sci-Fi Franchise
Also returning is Belgian cinematographer Jo Willems, with whom Lawrence has worked since starting out shooting music videos (graduating to work for the biggest names in the business, including Justin Timberlake, Pink and Lady Gaga). Willems shot the second through fourth movies in The Hunger Games franchise as well as Red Sparrow, also directed by Lawrence — all starring Jennifer Lawrence.
While Catching Fire was shot on 35mm with anamorphic lenses, “over the years we progressed our style, we went into digital and then ended up shooting large format,” he tells Gordon Burkell at Filmmaker U. “We always try to get more and more intimate with the characters and we have just ended up shooting wider and wider lenses. Even though they are sci-fi movies, I try and work in a naturalistic way.”
For this installment of the series, Lawrence and Willems opted for the ARRI Alexa 65 after Lawrence had worked with it on Apple TV+’s “See,” according to an interview with IndieWire. “I really started to see how, sort of, different the depth of field is and how the lens is for that camera and that sensor in general,” Lawrence explains. “You can actually go much wider without getting that kind of warping and distortion on people’s faces. … You really get that intimacy and maintain the scope at the same time.”
Lawrence tells Filmmaker U, “I also like shooting in very natural light, so a large part of the movie, where you end up in all these natural light landscapes, I think they look stunning.”
The director says he enjoys post-production more than shooting, which he finds really stressful. “I wake up every morning with a knot in my stomach because you really only have that day to try to get the scenes that are assigned to that day,” he says in a first-person essay written for MovieMaker Magazine. “So many things could go wrong — could be somebody’s personality, could be somebody’s sick, could be something’s broken, or something’s not working, or we didn’t plan something correctly, or it’s raining and you need sun. I find it so constantly stressful.
“But post, once you have all the material, you come home and the lifestyle is much more civilized again, and you sit with your editor and you go through it all, and then you see the movie come together in a whole new way. And there’s something really gratifying about that.”
Presumably, if this film is a hit, there will be another story set in the franchise to come.
“I would totally do another one, but it’s all up to Suzanne,” he says to Boone. “It’s the same as after the Mockingjays. I said, ‘I would come back 100 percent if asked.’ But it’s got to come from the mind of Suzanne, because she truly is the author of these things. But also, she writes from theme and writes from a real idea, and I think that’s what gives these stories their substance and their relevance.”
“We knew the film would have a variety of strong looks,” says Senior Colorist Dave Hussey of The Hunger Games: The Ballad of Songbirds and Snakes. Hussey, who works at Company 3’s Santa Monica, California facility, had collaborated frequently with director Francis Lawrence and cinematographer Jo Willems, both individually and together, on projects including the Apple TV+ series See and Jennifer Lawrence spy thriller Red Sparrow.
As they always did, Hussey and Willems started the process of developing looks during preproduction of this prequel. Starring Rachel Zegler as Lucy Gray Baird and Tom Blyth as a young Coriolanus Snow, Songbirds and Snakes is set 64 years prior to the events in The Hunger Games. The discussions between colorist and cinematographer would involve fleshing out a look for the film, which would be re-introducing viewers to a very different District 12 than they’d seen in previous Hunger Games movies.
“The Arena needed a gritty look for the fight scene,” Hussey says of that key location. Panem would be represented in its earlier days, still feeling the effects of a recently concluded war. “It needed an industrial feel for the urban parts of District 12 and a richer and more colorful look for the rural scenes. We also knew we would want a very stylized and ominous look for the hanging tree” — the infamous site of many executions, which is frequently referred to in previous films but shown for the first time here.
Hussey created one main show LUT (lookup table) for Willems to shoot through, which would display for the cinematographer and all department heads an approximation of a final look on set and subsequently in dailies. “Since the prequel is set 64 years in the past, just after an attempted insurrection, we wanted to give the film a grittier look for many of the scenes. We also developed a LUT that helped get us there,” Hussey says. “In some scenes, like those with the hanging tree, we wanted to push the grittiness even further, so we blended a combination of LUTs together to achieve this.”
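For the technically curious, a 3D LUT is just a lattice of output RGB values indexed by input RGB, so blending a combination of LUTs can be as simple as mixing two lattices entry by entry. Below is a minimal numpy sketch of the general idea; it illustrates the concept only and is not Company 3’s actual pipeline, which runs inside dedicated grading tools.

```python
import numpy as np

def blend_luts(lut_a: np.ndarray, lut_b: np.ndarray, weight: float) -> np.ndarray:
    """Mix two 3D LUTs of identical resolution.
    Each LUT has shape (N, N, N, 3): an output RGB value for every
    input RGB lattice point. weight=0.0 returns lut_a; 1.0 returns lut_b."""
    assert lut_a.shape == lut_b.shape, "LUTs must share a lattice size"
    return (1.0 - weight) * lut_a + weight * lut_b

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a 3D LUT to a float RGB image in [0, 1] via nearest-neighbor
    lookup (production tools interpolate trilinearly for smoother results)."""
    n = lut.shape[0]
    idx = np.clip(np.rint(image * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# e.g., push a scene 70% of the way from the show LUT toward a grittier LUT:
# graded = apply_lut(frame, blend_luts(show_lut, gritty_lut, 0.7))
```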
Once time for the final grade came around, Company 3 started receiving the stream of VFX shots that would be part of the film. VFX Supervisor Adrian De Wet would come into the color sessions at this point to offer input about how all the shot elements and the color were coming together.
“Grading the arena fight scenes was particularly enjoyable,” Hussey recalls, “because they took place during different times of day and involved a massive number of VFX shots. Adrian, Jo Willems, Francis Lawrence and I collaborated closely to bring all aspects of every shot together. We had recently worked in a similar way on ‘Slumberland’ for Netflix, which also required blending a large amount of VFX work.
“We did an overall balance of the scenes using different color temperatures to accentuate the different times of day and then by adding grain, using Power Windows with tracking, and making use of a number of sharpening tools we worked to subtly help guide where we wanted the audience’s eyes to be looking moment by moment.”
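Under the hood, a Power Window is essentially a soft, trackable mask that restricts a correction to one region of the frame. The following rough numpy/scipy sketch, a generic approximation rather than Resolve’s implementation, gates an unsharp-mask sharpen behind a feathered elliptical window:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def elliptical_window(h, w, center, radii, feather=0.25):
    """Soft elliptical mask akin to a Power Window: 1.0 inside the
    ellipse, fading to 0.0 across a feathered edge."""
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.sqrt(((ys - center[0]) / radii[0]) ** 2 +
                ((xs - center[1]) / radii[1]) ** 2)
    return np.clip((1.0 + feather - d) / feather, 0.0, 1.0)

def sharpen_in_window(image, mask, sigma=2.0, amount=0.6):
    """Unsharp-mask sharpen, blended in only where the mask is active,
    subtly pulling the viewer's eye toward that region."""
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    sharpened = np.clip(image + amount * (image - blurred), 0.0, 1.0)
    return image + mask[..., None] * (sharpened - image)

# frame: float RGB array of shape (h, w, 3) in [0, 1]
# mask = elliptical_window(1080, 1920, center=(540, 960), radii=(300, 420))
# guided = sharpen_in_window(frame, mask)
```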
Hussey also used Resolve to handle some specific work isolating each of Coriolanus Snow’s eyes, making them the intense blue that the character of Snow (a younger version of the villain played by Donald Sutherland in the earlier films) called for.
As happens sometimes in post, they were asked to present a graded trailer to the studio before completing the final grade for the film. “We were able to nail down many of the looks during this time,” Hussey recalls. “That was very helpful because at that point we had locked down many of the scenes before we started the actual grade of the film. Jo and Francis are both very decisive when it comes to coloring so things moved very quickly.”
For the grain Hussey added, the production decided to go with LiveGrain Real-Time Motion Picture Film Texture Mapping — a proprietary tool licensed on a per-project basis, which many swear by for the amount of control it offers users over fine detail. “We went fairly aggressive initially with the grain and then backed it off a bit as we finessed all the scenes,” Hussey notes.
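LiveGrain itself is proprietary, but the basic idea of grain emulation, overlaying softened noise that reads strongest in the midtones and can be dialed back per scene, can be approximated in a few lines. This is a crude sketch for intuition only, not the LiveGrain algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_grain(image, strength=0.04, clump=1.2, seed=0):
    """Overlay film-like grain on a float RGB image in [0, 1].
    strength: overall intensity (back this off to taste, per scene).
    clump: blur radius that groups pixels into grain-like clusters."""
    rng = np.random.default_rng(seed)
    noise = gaussian_filter(rng.normal(0.0, 1.0, image.shape[:2]), sigma=clump)
    # Film grain reads strongest in the midtones, so weight by 4x(1-x),
    # which peaks at mid-gray and vanishes at pure black and white.
    luma = image.mean(axis=-1)
    midtone_weight = 4.0 * luma * (1.0 - luma)
    return np.clip(image + strength * (noise * midtone_weight)[..., None], 0.0, 1.0)
```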
While colorists often grade projects with a Dolby HDR deliverable in HDR first and derive the P3 and subsequent versions from there, Hussey started in P3 (still by far the version most cinemagoers would see). Once the filmmakers signed off on that, he executed separate passes for Dolby Cinema, for Dolby Vision (covering streaming and other home deliverables) and for an IMAX version shown in select theaters.
“In terms of the look, we always tried to stay relatively true to the P3 theatrical version,” Hussey explains, adding, “Of course, the HDR versions have a little more pop in the blacks and the highlights, but it wasn’t about making a new look. It was entirely about enhancing the P3 version that Jo and Francis signed off on.”
Gavin Guidry: How to Get Great Content From/With/By Creators
TL;DR
Gavin Guidry, creative director at Spotify, maintains there’s a “massive disconnect” between the mindset of a marketer and that of a creator and outlines what can be done to address this. Watch his full presentation above.
By putting creators in the driver’s seat, brands can create content that is unique and relevant and that cuts through to audiences.
The secret ingredient to all of this is community “because relevance comes through creating consistent impact.”
Brands can no longer just post content and hope that audiences on social media platforms will actually see it. They either need a massive spend, or they need to cross-collaborate or pay to collaborate with a creator to actually get their content seen.
“Really what we’re seeing is a brand’s ability to impact audiences going down, while creators’ ability to impact audiences is steadily moving upwards,” says Gavin Guidry, creative director at Spotify. “That means the road to relevance must go through real people.”
In “The Road to Relevant Video Content,” a session from Vimeo’s Outside the Frame event published on Vimeo, Guidry talks marketers through the ins and outs of a successful influencer collaboration. Guidry heads up Spotify’s podcasts, working with creators and brands to create content.
He claims there’s a “massive disconnect” between the mindset of a marketer and that of a creator.
What Creators Want to Make
A marketer cares about KPIs, ROI and brand perception, he says, but creators care about authenticity and connection to their community above all else. They don’t always know what marketers want, and marketers don’t always know what creators want.
“But when creators have a say in making content, you get content that’s authentic and connected to their community, and it can help you check your marketing boxes as well. Working with creators helps your brand actually build credibility.
“The good thing about working with creators from a fan perspective is that monetizing doesn’t feel like buying — it feels like supporting a creator that they love.”
Some 49% of consumers say they rely on influencer recommendations for their purchasing decisions.
“People trust people more than they do brands, and algorithms are responding to that. Hiring an influencer to create your video content is a winning strategy, but the collaboration can be fraught.”
Creating Impact
The secret to success is community “because relevance comes through creating consistent impact.”
Guidry insists, “It’s not about chasing cultural relevance; it’s about earning community relevance.”
He outlines three steps to create impact through community: get vulnerable; collaborate with influencers; use video.
Vulnerability is not a marketing metric or a business tactic. It’s more of a soft skill, he explains. It’s about showing that your brand is human.
“When a brand doesn’t open themselves up, they don’t ask the community what they want, they just give them content without really asking. And that can end up exploiting a community. The goal is to ramp up your vulnerability by asking your audience what they want.”
Guidry urges brands “to embrace risk” because getting vulnerable requires exposure to meaningful risks.
This will lead to better creator collaboration. It means going deeper than demographics to truly understand the creator’s audience.
“The term creator is really broad — they could be comedians, writers, hosts, musicians, even activists — but the thing that binds them all together is that they make content that nourishes audiences. So it’s important to know that creators have their own audience, their own style and their own motivation.”
For creators, their audience is “what they spend their blood, sweat and tears curating with their content. You don’t want to ask them to do anything that their audience will find unauthentic.”
Instead, seek to understand their audience and what you can offer them through this partnership.
Creator, Collaborator, Partner
Think about style — the way a creator talks or the way that they create for their audience. Don’t present a campaign that fits outside their style, but do seek their input on how their content comes to life through their unique lens.
Consider a creator’s motivation. Guidry says there’s a bit of a misconception with creators that it’s all about the money.
“That couldn’t be further from the truth. Creators are probably more excited than you are to work with big brands. It’s like a feather in their cap. They’re able to say, ‘Hey look audience, I’m now able to work with these brands.'”
But don’t just seek a transactional deal with creators, Guidry advises. “Seek to build a long term relationship that creators can talk about with their audiences over time. Offer a mutually beneficial partnership that results in creators raising their profile through your partnership.”
Of course, you want to make sure you find a creator with the right niche and an engaged audience. You want to make sure that creator is an authentic user of your brand or product and that they have a strong style and POV.
“You also want to make sure that they’re professional, and that they have craft that can elevate your brand.
“Lastly, use video. Over 200 million people consider themselves creators, and this means that your audience is just a resource of creativity waiting to be unlocked. You can use video and creative partnerships to do just that. It’s the best way to engage audiences with CTAs and educate in relevant ways.
“You get video that’s raw and real and shows your brand is human. And you get to sit back and let your videos woo a built-in audience.”
He sums up: “So if you didn’t hear anything else I said today, when you prioritize community and build authentic creator relationships, you can create relevant video content.”
An Artlist survey of business leaders and more than 7,000 creators, conducted to uncover the trends businesses need to know in 2024, revealed that Gen Z is at the forefront of this scrutiny, demanding even more tailored and genuine experiences.
Artlist suggests brands follow these steps to create a smooth working relationship:
1. Open and clear communication
Discuss expectations, objectives, creative freedom, timelines, and deliverables from the outset. It’s also important to have regular feedback sessions and open two-way communication throughout the process.
2. Get to know each other
The whole process and the end results will always be better if both the brand and the creator take the time to get to know each other. Share information about working styles, audience, and creative vision and see if it’s a good fit before starting the project.
3. Be fair and respectful
Creators deserve to be fairly compensated for their time and effort, and brands deserve to receive high-quality content on time. Make sure you’ve sorted out all the compensation timing and expected deliverables in a detailed brief and legal contract.
4. Build long-term relationships
Instead of viewing collaborations as one-off transactions, brands and collaborators should look at how they can continue to work together in the future. This continuity is beneficial for the brand and creator and helps the audience see a familiar face or content style.
In today’s digital world, where every brand is now also a content business, brands can’t just rely on providing a good product or service. They also need to genuinely engage their audience, and that means working in the right way with the right creator.
“Saltburn” Is Debauched and Depraved But It Looks Like a Caravaggio Painting. So Let’s Start There.
TL;DR
Emerald Fennell explains how her psychological black comedy “Saltburn” is a satire on the British class system using the vehicle of a grand stately home setting.
Cinematographer Linus Sandgren uses the near-square 1.33:1 Academy ratio and shoots on 35mm to film “disgustingly beautiful moments.”
Fennell says, “We did a lot of work then to make it a physical experience — uncomfortable, sexy, difficult. I thought a lot about the feeling of popping a spot — queasy pleasure.”
Emerald Fennell’s latest cinematic spectacle, Saltburn, savagely peels back the veneer of the British upper class of the mid-2000s, crossing Brideshead Revisited with The Talented Mr. Ripley served with a twist of vampire-infused black comedy.
The film revels “in voyeuristic repulsion and the fetishization of beauty,” writes IndieWire’s Bill Desowitz. It is told through the point of view of cunning Oxford student Oliver (Barry Keoghan), who becomes infatuated with his aristocratic schoolmate, Felix (Jacob Elordi), following an invitation to spend the summer with Felix’s eccentric family, the Cattons, at their titular estate.
“I’m setting out to be honest and unsparing, and I’m not frightened of people not liking it,” Fennell explains to Salon’s Jacob Hall. “I mind if people don’t appreciate the craft or they think I haven’t done my homework, or they think I’ve made decisions that aren’t deliberate. That gets my goat, because that’s a different argument. But if you don’t like it, I don’t mind.”
Fennell’s bold visual plans began with shooting in 35mm to capitalize on the rich color and contrast, and using the 1.33 aspect ratio to enhance the story’s voyeurism.
“She wanted to convey the hot summer and foggy night, influenced by the legendary landscape painter Gainsborough, as well as more dramatic lighting inspired by Hitchcock, Nosferatu, and baroque painters Caravaggio and Gentileschi,” we learn from Desowitz’s interview with the film’s cinematographer, Linus Sandgren (La La Land).
The DP landed the job at the suggestion of Saltburn producer Margot Robbie, who had just worked with Sandgren on Babylon and knew first-hand what dark beauty he could achieve shooting in 35mm.
“I had seen Emerald’s debut film, [Promising Young Woman], where she made some very interesting decisions,” Sandgren said. “For example, letting the lady die in a single take, which was horrible to watch. And then when I got the Saltburn script, I thought it was brilliant. She writes very visually and in a descriptive way and I got some very clear images in my head.”
They both agreed that shooting on film was right for the story, as Sandgren explained following a screening at Camerimage, as reported by Will Tizard at Variety. The medium’s reaction to red light in some key scenes inside the family home was particularly well-suited to the growing sense of horror, Sandgren said. So were close-ups of characters feeling extremes of emotions, with sweat, hair and bodily detail helping to build on the descent into obsession.
To strike just the right tone in these scenes, Company 3 colorist Matt Wallach says, “We got into using tools in the Resolve, like the Custom Curves and the Color Warper, to subtly bring out, say, the red lights in a party scene or the steely blue moonlit tones in a night exterior while always keeping the skin tones where they should be. With Linus, skin tone always has priority.”
Sandgren shot with the Panavision Panaflex Millennium XL2 camera equipped with Primo prime lenses to get colors and contrast with under-corrected spherical aberration. It all worked out well to propel the journey into darkness, Sandgren said, growing into other scenes of seduction that push boundaries. All of which just enriches “the bloody cocktail of Saltburn,” he says, noting that, after all, “Vampires are sexual beings.”
When the director first spoke with Sandgren about the project, she described it as a vampire movie “where everyone is a vampire.”
Elaborating on this to Emily Murray at Games Radar, Fennell says she liked the vampire metaphor as a vehicle for attacking the class system and our unhealthy fascination with the rich and famous.
“We have exported the British country house so effectively in literature and film, everyone internationally is familiar with… their workings,” she says. “As we are talking about power, class, and sex, this film could have existed at the Kardashians’ compound or the Hamptons, but the thing about British aristocracy is that people know the rules because of the films we have seen before. We all have an entry level familiarity but the things that are restrained about the genre are overt here — as we look at what we do when nobody is watching us.”
This embodied the vampire ethos at night in all its gothic beauty and ugliness. “Emerald’s attracted to something gross happening, but you see it in a perfectly composed image with the light just hitting perfectly,” Sandgren said in an interview with Tomris Laffly at Filmmaker Magazine. “I think the challenge was finding a language for the film with secrets that you don’t want to reveal and having it seem ambiguous.”
Fennell wrote Saltburn during COVID, when people couldn’t even be in the same room together, “let alone touch each other, let alone lick each other,” she said, commenting on some of the film’s explicit scenes. “This is a film really about not being able to touch. Now, especially, we have an extra complicated relationship with bodily fluids.”
As Laffly prompts, this sounds like Fennell wanted to unleash a beast we all have in ourselves that was so oppressed during lockdown.
“That certainly felt like one of the motives,” she admits. “There’s nothing that is more of a rigid structure than the British country house and the aristocracy, nothing more impenetrable. So yes, to unleash the viscerally human into that arena was so much of it.”
To Fennell, so much of film is “frictionless, smooth, so consistent. And I feel like cinema — without being so grandiose and pompous — is designed to be watched in a dark room of strangers, and it can be expressive, it can be to some degree metaphorical. When I look at the filmmakers that I love [like David Lynch or Stanley Kubrick] these are people who are making films that I feel in my body.”
This idea of foregrounding intimacy led to their decision to shoot within the Academy ratio. Again, she and Sandgren referenced classic portrait painters.
“To do that formal framing, if you’re looking at Caravaggio or lighting in a Joshua Reynolds and that kind of blocking, it is so much easier the more square you are. And I like extreme closeups, especially when you’re talking about sex and intimacy and inhuman beauty,” she told Filmmaker. “If you’re 1.33, you can have a full face. It can fill the frame completely.”
Scenes in the film are deliberately uncomfortable to watch. They are what Desowitz calls “disgustingly beautiful moments,” but Fennell emphasizes that they aren’t in any way there for shock value: “A lot of this film is an interrogation of desire,” she tells the Inside Total Film podcast.
“With this type of love, there has to be this element of revulsion, and for us to feel what Oliver is feeling and understand that, you need to physically react to stuff. We did a lot of work then to make it a physical experience — uncomfortable, sexy, difficult, queasy. I thought a lot about the feeling of popping a spot — queasy pleasure.”
Much of the more salacious coverage of Saltburn has concentrated on its final scene where Keoghan dances stark naked through the estate to Sophie Ellis-Bextor’s hit 2001 song, “Murder on the Dancefloor.” “Everything is diabolical, but it’s exhilarating,” Fennell explained to Jazz Tangcay at Variety. “It’s post-coital, euphoric, solitary and it’s mad.”
As for Sandgren’s camera moves, he pointed out that Oliver was always in frame for most of the film. “But this way, we see him full-figured. I think it was clear we wanted to follow him. Following him through that scene felt more natural to watch everything about him, and watch from the outside. It’s about his physicality and how he feels in that moment.”
It’s a tour de force for Keoghan, who, according to the cinematographer, was fearless throughout, but worked especially hard at rehearsing and shooting the choreographed dance sequence.
In capturing it, Fennell used 11 takes. “They were all very beautiful,” she said. “It’s quite a complicated and technical camera move. A lot of the time, he was immensely patient because there was a lot of naked dancing. Take #7 was technically perfect. You could hear everyone’s overjoyed response, but I had to say ‘sorry’ because it was missing whatever it was that gave Oliver that slightly human messiness. So, we had to do it a further four times.”
“[I]t was incredibly difficult to do because obviously it’s a oner, and we had to light every room completely from outside without seeing any of the kit,” Fennell tells Salon. “We had to set up all of the sound so we could switch to every room because of the lag, without again, seeing any of the kit.”
Fennell likens the actor to Robert Mitchum, as she explains to Filmmaker Magazine. “I think he’s just exceptional, not just now but for all time — someone like Robert Mitchum is a good comparison. There are actors who have a thing that nobody else has had before, and I think Barry has that.”
Empire State of Mind: Creativity and Social Collaboration at NAB Show New York
TL;DR
The winner of the “Empire State of Mind” photo contest, Dushawn (Dusan) Jovic, was announced during the “Empire State of Mind: Photo Contest Finale” at NAB Show New York.
With a cash prize of $4,000, this first-of-its-kind contest included a chance to direct a photoshoot on the streets of New York City with social media influencer and expert strategist Avori Henderson.
The session, featuring Bamboo founder and CEO Nick Urbom, explored new storytelling techniques blending photos, videos and narratives, and provided insights into leveraging social media tools and content monetization strategies.
Henderson and Dusan discussed strategies for discoverability, which remains a key challenge for content creators in a crowded social media landscape, including collaborations, hashtags and spec work to attract brands.
Watch the “Empire State of Mind: Photo Contest Finale” at NAB New York 2023.
At NAB Show New York’s Photo+Video Lab, the spotlight was on creativity, innovation and social collaboration with the announcement of the winner and full contest review of “Empire State of Mind presented by Bamboo.” This first-of-its-kind contest was aimed at finding the next great photographer with a cash prize of $4,000 and a chance to direct a photoshoot on the streets of New York City with social media influencer and expert strategist Avori Henderson.
The winner, Dushawn (Dusan) Jovic, was announced during a session entitled “Empire State of Mind: Photo Contest Finale.” Led by Bamboo founder and CEO Nick Urbom and featuring Jovic and Henderson, the session demonstrated new storytelling techniques blending photography and other visual mediums. It also covered social media tools, providing concrete practices for leveraging photos and videos for social media channels, as well as strategies for content monetization and winning methods for crowd-sourcing photo/video projects.
Bamboo is a mobile app and web-based social platform designed for creators to post collaboratively to monetize their efforts. Urbom, the visionary behind Bamboo, has developed and produced a number of cutting-edge experiences for creators to advance their careers, including tech platforms, professional conferences and awards shows. He was the co-founder, CEO and chairman of Infinity Festival Hollywood, and has co-founded three world-renowned trade organizations: the International 3D Society, the Advanced Imaging Society, and the VR Society.
“This is a $100 billion economy with 200 million people involved and there’s a lot of people looking for tools for how they’re going to grow and advance their careers,” Urbom said of the burgeoning creator economy.
The Bamboo platform was founded as the result of countless conversations with creators about the tools and platforms they utilized, as well as their own personal goals, Urbom said. “We were hearing a lot of feedback from people saying that, ‘hey, listen, we can do stuff for free online, we can do stuff with rev-shares, but it would be really nice if we could actually just run our own business, and if we could create content however we wanted to put it out there.’”
“From what I have seen, only Facebook Groups allows you to take a piece of content and put it in front of people who are interested in that same thing,” says Henderson, who was one of the 12 participants in the 2022 reboot of competition The Mole on Netflix. “Bamboo has built a space where you can create these little categories within your following. The most profitable thing you can do on social media is build a community and that makes you money.”
Collaboration, she notes, “is extremely important on social media, it’s one of the best and fastest ways to grow.”
Originally from Serbia, Jovic currently lives in New York City, bringing to his craft a distinctive cinematic style honed over years of working as a videographer and photographer with creators and brands including Saint Laurent.
“His unique color grading style is described as a candid and lifestyle cinematic vibe, with a hint of subtle sexiness,” says Henderson, “and what’s wrong with that?”
Jovic acknowledges that collaborations can sometimes be fraught but remains a “big fan” of the synergies they can provide. “As a creative, when you hear the word collab, you kind of start twitching,” he says.
“But I feel it has to be beneficial for everybody. When you start, you want to look at collabs as building your portfolio. When you’re already established, you want to look at collabs as something that has to be beneficial for everybody. It’s always good to meet new people, to create, to establish new relationships, to help other people, because if you’re already established, not everything has to be ‘Okay, how can I get paid?’
“It doesn’t matter how big or small influencer or a brand you are,” he continues, “there’s always something if you’re good with creating, if you’re good with expressing yourself, something good is always going to come out of it.”
These “collabs,” Urbom points out, are called “partnerships” and “deals” in the business world, and “whatever you call it, they’re massively on the uptrend in terms of how brands are looking to partner and get their voice out and get their marketing message out.”
Discoverability in a crowded social media landscape remains a major priority for creators. Hashtags, Henderson and Jovic agreed, are an invaluable resource for creators who are just getting their start. “Hashtags [are] a big, big, big factor,” says Jovic. “Yep, thanks to you, hashtag, I pretty much started my career. And that’s how people usually find me.”
Spec work designed to attract the brands a creator wants to work with is another winning strategy. “When I started doing spec commercials for brands and products and actually posting stuff that is relevant to other brands, that’s where my business started growing,” Jovic shares.
“Once you start thinking like your ideal audience is when you start attracting your ideal audience,” Henderson advises. “So really think about your target market where you want to hone in on and then make every piece of content catered towards that.”
Check out the image gallery below to see Jovic’s work on the photoshoot with Avori Henderson, which took place on Little Island, a new public park located in the Hudson River, during NAB Show New York:
TL;DR
Advertised as the last Beatles song, “Now and Then” was built from a recording made by John Lennon shortly before his murder in 1980, using the same AI technology that director Peter Jackson used in his documentary “Get Back.”
The AI tool originated from forensic investigation work by the New Zealand police and was further developed by VFX studio Weta.
This could be the first in a long stream of work that’s salvaged or saved using artificial intelligence.
The Beatles, guided by producer George Martin, were famously pioneers of new technology in making seminal studio albums like Sgt. Pepper’s Lonely Hearts Club Band and The White Album; so using an AI tool to complete their final song should be seen as a natural evolution.
As John Lennon’s son Sean Lennon says in a making-of video, “My dad would’ve loved that because he was never shy to experiment with recording technology.”
“Now and Then” was built from a recording made by John Lennon shortly before his murder in 1980, using the same AI technology that director Peter Jackson used in his documentary Get Back to clean up and separate voices in archival recordings.
Co-produced by Paul McCartney and Giles Martin, son of original Beatles producer George Martin, the track features elements from all of the Fab Four — including a Lennon vocal track that was first recorded as a demo tape in the 1970s.
The track was included on a cassette labelled “For Paul” that Lennon’s widow, Yoko Ono, gave the three surviving Beatles in the 1990s as they were working on a retrospective project.
At the time the band members tried to complete Lennon’s demo but considered it unsuitable for release. It wasn’t until July 2022, Jackson told David Sanderson at the Sunday Times, that McCartney contacted him for his help in producing a new version.
From Peter Jackson’s “Now and Then” video for The Beatles
The audio software, called Mal (machine audio learning), allowed Lennon’s vocals to be separated from the demo. The track was then rebuilt with new performances from McCartney and Ringo Starr, along with Harrison’s guitar parts from the band’s shelved recording sessions in the 1990s.
As is clear from the making-of doc, the chief difficulty with using the original cassette recording was that Lennon’s vocal was too indistinct in places, competing with the piano and the sound of a TV playing in the background.
Weta’s AI, developed for Get Back, managed to cleanly isolate the vocals from the background, allowing the mix to finally be made.
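Mal is proprietary to Jackson’s operation, but the underlying technique, machine-learning source separation that splits a mixed recording into stems, is available in open-source form. Here is a minimal sketch using the open-source Spleeter library as an illustrative stand-in; it is not the tool actually used on “Now and Then”:

```python
# pip install spleeter
from spleeter.separator import Separator

# Load a pretrained two-stem model: vocals vs. everything else
separator = Separator('spleeter:2stems')

# Writes vocals.wav and accompaniment.wav into output/demo_tape/
separator.separate_to_file('demo_tape.mp3', 'output/')
```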
As Jackson explained to Rob LeDonne at Esquire, the tech originated from forensic investigation work developed by the New Zealand police.
“When it’s noisy and they want to hear a conversation between two crooks or something, they can isolate their voices. I thought that’d be incredible to use, but it’s not software that’s available to the public.
“So, we contacted the cops and we asked, ‘Do you mind if we brought some tape to the police station and we ran it through your confidential audio software?’ So, we did a 10- or 15-minute test and the results were really bad. I mean, they did the best they could. But you realize that for law enforcement, the quality and fidelity of the audio doesn’t need to be good, so it was far, far short of what we could use.”
From Peter Jackson’s “Now and Then” video for The Beatles
Weta took the theory of it and made a tool capable of producing high quality audio. “To hear some of those early songs in a fully dynamic way… You realize what you’ve been hearing is quite a limited range of audio,” Jackson said. “You don’t realize how crude the mixing on some of the early songs were, how muddy they were.”
In the making-of video below, McCartney expresses his doubts about making full songs out of Lennon’s demos, out of respect for the late songwriter’s unfinished work.
“Is it something we shouldn’t do?” McCartney says. “Every time I thought like that I thought, wait a minute, let’s say I had a chance to ask John, ‘Hey John, would you like us to finish this last song of yours?’ I’m telling you, I know the answer would have been, ‘Yeah!’”
Martin says human creativity is still at the heart of the song. Even if AI was involved, it wasn’t used to create synthetic Lennon vocals. “It’s key for us to make sure that John’s performance is John’s performance, not a machine learning version of it,” he told David Salazar at Fast Company. “We did manage to improve on the frequency response of the cassette recording…but it’s critical that we are true to the spirit of the recording, otherwise it wouldn’t be John.”
Jackson also directed the video for the track, which contains a few precious seconds of the first film ever shot of The Beatles on stage, in Hamburg in the early 1960s.
“A Beatles music video must have great Beatles footage at its core,” he said as part of a lengthy statement about the project on The Beatles website. “There’s no way actors or CGI Beatles should be used. Every shot of The Beatles needed to be genuine. The 8mm film is owned by Pete Best, the band’s original drummer. Clare Olssen, who produced the video, contacted Best to get a few seconds of his film.”
From Peter Jackson’s “Now and Then” video for The Beatles
Angela Watercutter at Wired suggests that “‘Now and Then’ signals, if anything, not just the last Beatles song but the first in what could be a long stream of work that’s salvaged or saved using artificial intelligence.”
Indeed, Jackson has hinted at the possibility, as The Guardian’s Ben Beaumont-Thomas reports, of more Beatles music to come culled from archival footage he went through when editing Get Back, the eight-hour docuseries about The Beatles.
“We can take a performance from Get Back, separate John and George, and then have Paul and Ringo add a chorus or harmonies. You might end up with a decent song but I haven’t had conversations with Paul about that,” he said.
Virtual Production Isn’t Just a Technology, It’s Now a New Way of Thinking
Watch the panel discussion “The Virtual Production Revolution: A Real Time Love Affair” from NAB Show New York 2023.
TL;DR
A panel of virtual production experts illuminated how this rapidly advancing field isn’t just reshaping collaboration and creative workflows — it’s changing the content itself.
Led by veteran Virtual Production Supervisor Jim Rider, this discussion featured KéexFrame founder & CEO Arturo Brena, VP Toolkit founder & CEO Ian Fursa, and ASHER XR founder & CEO Christina Lee Storm.
VP technologies break down silos between departments, allowing for real-time collaboration and faster iteration, fundamentally changing the way content is produced.
The role of the Virtual Production Supervisor is more vital than ever, allowing various departments to plan more efficiently and effectively.
At NAB Show New York, a panel of virtual production experts illuminated how this rapidly advancing field is reshaping collaboration and creative workflows for content production.
The session, “The Virtual Production Revolution: A Real Time Love Affair,” was moderated by industry veteran Jim Rider, Virtual Production Supervisor at Pier59 Studios, and featured Arturo Brena, founder & CEO of 3D creative studio KéexFrame; Ian Fursa, founder & CEO of educational series VP Toolkit; and Christina Lee Storm, founder & CEO of ASHER XR, which specializes in the development of real-time, virtual production, AI, and emerging technology for linear and multi-platform storytelling.
These industry pros underscored that virtual production isn’t just a technological advancement; it’s a catalyst for a more integrated and collaborative approach in media creation. They discussed how VP technologies break down silos between departments, allowing for real-time collaboration and faster iteration, fundamentally changing the way content is produced.
L-R: Jim Rider, Virtual Production Supervisor, Pier59 Studios; Arturo Brena, Founder & CEO, KéexFrame; Christina Lee Storm, Founder & CEO, ASHER XR; and Ian Fursa, Founder & CEO, VP Toolkit.
The panelists, each bringing their unique perspective and experience, shared insights on how virtual production is enabling creators to work more cohesively, enhancing creativity and efficiency. Read the highlights of the discussion below, and to gain even more insights watch the full session in the video at the top of the page.
Going Beyond In-Camera Effects
Setting the stage for the discussion, Rider observed that virtual production is often understood only in terms of in-camera VFX, “you know, Mandalorian-style virtual production,” but that it is in fact a multifaceted discipline encompassing a much broader spectrum.
“We’ve been trying to really sort of set the record straight because there was a lot of excitement that came out of on-set virtual production in the big shows, but that doesn’t apply to everyone,” Storm agreed. Virtual production can actually be divided into four main categories, or “buckets,” she argues: visualization, volume capture, on-set virtual production or in-camera VFX, and real-time workflows.
Real-time workflows, in particular, are continuing to evolve, she said, becoming increasingly accessible to productions of all sizes. “More and more people can use [real-time workflows] because they don’t have to have a massive stage, per se.”
Brena noted that in-camera VFX comprises roughly 30% of the work at his New York-based studio. “All the rest goes more into linear creation, linear animation using real-time rendering.”
Real-time rendering “is the core of virtual production, touching every aspect,” he said. “We use it a lot to adjust workflows. So [for] traditional linear content, let’s say that we’re creating an opener for a show or we’re creating a commercial or something, we take the jump into using real-time rendering technology in order to optimize workflows.”
The Vital Role of the Virtual Production Supervisor
Real-time technology is revolutionizing creative workflows, the panel unanimously agreed, fostering unprecedented collaboration across various departments.
“There is a very defined pipeline for creating linear content that is very segmented in between the different departments,” Brena said, “and now, with this new technology, it allows you to kind of merge and get all these departments together and working like in a single platform within a single unit.” This unity, he said, “massively improves how fast you can iterate.”
Amid these changing dynamics, the role of Virtual Production Supervisor becomes increasingly vital to ensuring that all departments work cohesively towards a unified creative vision. “It is so important to have that person who is the translator, who is the person that understands all sides,” Storm said, going on to warn, “And if you don’t have that, I’ve seen failures where you don’t have that role.”
Communication and preparation are key, and changing how departments communicate with one another is a major challenge, Fursa said. “You can’t have a director or cinematographer talking directly to… artists, because they speak two very different languages,” he cautions, recalling a challenging production day that stretched to 16 hours “just because of bad communication.”
For mid-tier productions, Brena added, there is the added complexity of communication across studios. Another big challenge, he said, was aligning expectations. There’s a lot of misinformation out there, he noted, and it’s the VP Supervisor’s job to educate clients and teams about what’s possible and what’s not.
Storm emphasized the iterative process and managing expectations to avoid disappointment on set. “The power of no is a strong thing,” she said. “And it’s not because I don’t want to deliver. I’m just… trying to make sure you know going in what you can get.”
The Future of Virtual Production
Looking ahead to the future of virtual production, “data-driven content is going to change everything,” Brena said.
He envisions game engine technology allowing for delivery of pre-rendered content that can be adapted in real time according to viewers’ interests or to support monetization. “Data-driven content is definitely the one that I see [getting] the fastest adoption, because it’s usually the one [that answers] how are we going to make money,” he said, adding that media companies will soon discover that by versioning out data-driven content “you will probably multiply revenue.”
Piggybacking on the trend for data-driven content, Storm predicts an upswing in location-based and user-generated content. “Sort of like what [happened] during the pandemic, we’re going to be able to see distinct voices come out in play and be able to share stories,” she said. “More than anything, it’s exciting. Because when there are tools that are easier for people to play with, then creativity starts to surge.”
Fursa touted new advancements in image-based lighting, noting that color calibration woes for DPs may soon be a thing of the past. “For now we still have to light, and LED walls generally are not color accurate,” he explains. “They’re missing, like, a white chip. So that means that they don’t have the full spectrum” of traditional lighting for film production. “But right now, there’s a huge wave of new technology that’s coming fast.”
“Special Ops: Lioness” Director/Cinematographer Paul Cameron Aims for Action and Emotion
Zoe Saldana as Joe in Season 1 of “Special Ops: Lioness.” Cr: Lynsey Addario/Paramount+
TL;DR
“Special Ops: Lioness” director and DP Paul Cameron talks about working with the show’s star-studded cast and stepping into the director’s chair.
Cameron speaks about finding freedom in restriction and how a cinematography background feeds his approach as a director.
“Sometimes you need to be a bit bold and break the ‘Five Cs of Cinematography’ [camera angles, continuity, cutting, close-ups, and composition] and deconstruct them,” he says.
Show creator Taylor Sheridan is very, very specific about scripts: “If there’s time to do additional lines, or additional shots, or coverage, or anything of that manner, that’s fine, but you’ve got to shoot the script, and you’ve got to treat it like the Bible.”
Cinematographer Paul Cameron, ASC, is best known for his collaborations with Tony Scott (Man on Fire, Déjà Vu), and he has carried lessons from the late director’s approach into his own directing work.
In Paramount+ series Special Ops: Lioness, for instance, Cameron takes an unconventional approach to coverage.
In Cameron’s hands, even a standard dialogue scene between two actors has extra dynamism and energy that come simply from looking for unorthodox angles or alternating focal lengths in a manner that might seem counterintuitive.
“The idea of matching singles or overs in a conventional cutting pattern has never really become part of my vocabulary,” Cameron told IndieWire‘s Jim Hemphill. “It’s more about what looks good on each side — a 65mm lens on Nicole Kidman’s side might be better with a 50mm on the other side with Zoe Saldana, or one side might be more emotional at a steeper angle on an 85mm,” he said.
“Sometimes you need to be a bit bold and break the ‘Five Cs of Cinematography’ [camera angles, continuity, cutting, close-ups, and composition] and deconstruct them.”
Zoe Saldana as Joe and Nicole Kidman as Kaitlyn Meade in Season 1 of “Special Ops: Lioness.” Cr: Lynsey Addario/Paramount+
Speaking to Matt Hurwitz at the Motion Picture Association’s The Credits, he added, “With Tony, I learned to just be fearless with cameras and put them in places I think are emotionally appropriate and not necessarily coverage-oriented. Looking for a shot, say, with a steep angle, a little too close, to make it just the right level of uncomfortable if the scene calls for that. It’s a matter of what makes it feel right, as opposed to matching focal lengths on lenses and distances, which many shows do.”
Taylor Sheridan created and wrote the military thriller that also stars Morgan Freeman, Laysla De Oliveira, Michael Kelly and Jennifer Ehle.
The genesis of the story was a real unit that the CIA set up in Afghanistan for handling female prisoners. Sheridan’s idea: “What if we take young Special Ops women and put them in situations with high terrorist targets?” Namely, befriending the daughter of a target, or a sister — female to female.
“This way, they could either track and/or take action against high level terrorists. That, to me, was a pretty extreme and interesting idea. It may be slightly different than what Taylor does with a lot of his other shows, but it’s so female-oriented,” Cameron told Owen Danoff at Screen Rant.
He served as the DP on the first two episodes and directed Episodes 5 and 6, and in collaboration with Sheridan and pilot director John Hillcoat established the kinetic visual language for the series.
“Sheridan is very, very specific about scripts,” he says. “If there’s time to do additional lines, or additional shots, or coverage, or anything of that manner, that’s fine, but you’ve got to shoot the script, and you’ve got to treat it like the Bible.”
The challenge is that high-caliber talents like Kidman, Freeman and Saldana are used to improvising their lines. So how did Cameron handle that?
“It’s just always that situation, like, ‘Listen, we’re going to shoot the lines the way they’re written and then, if there’s an idea, we can either address it together or let’s get Taylor on the phone, and we’ll see if it’s something we want to address or extend a little time to shoot as well,’” he told Danoff. “Again, it seems very constrained, but it’s kind of freeing in the sense that you really have that voice of the writer and that showrunner, and that’s what you’re doing.”
Cameron was also heavily involved with Westworld, helping create the initial look of the show and directing episodes up through the series’ final season.
Working with Jonathan Nolan on Westworld, Cameron saw a director who “had linear beliefs of story and stayed with it, and doing that within the work of television,” he says in part two of his interview with Hurwitz. “The reason I started directing there was because I could see somebody setting the bar as high as I’ve ever seen.”
He also learned how to handle a massive amount of scenes in a limited time window. “We might lose a day for some reason and need to make it up, and even with all my experience, I was, like, ‘Oh, my God — how are we gonna do all this stuff?’ And, inevitably, we did it. And that gave me great confidence when I went to direct on Westworld.”
Zoe Saldana as Joe in Season 1 of “Special Ops: Lioness.” Cr: Lynsey Addario/Paramount+
For Special Ops: Lioness, he had to hand his director of photography hat to cinematographers like Niels Albert, John Conroy and Nichole Hirsch Whitaker, something that can be difficult for someone who’s been in their shoes for so many decades.
Most important, he says, is to include them in prep as much as possible, evaluating scenes and locations, “and to really be open to big decisions. What is this scene about? What are the storytelling aspects, and how are we going to manifest it in this location?”
While Cameron and Hillcoat originally considered using a large format camera like the Sony Venice or ARRI Alexa LF, the two settled on the popular Alexa Mini LF for most of the production.
Hillcoat didn’t want to see any lens built after 1980, so Cameron gathered an eclectic set for the shoot, including Canon K35s and uncoated Zeiss Super Speed lenses. “They all react so differently,” he told Hurwitz. “The K35s have a great softness on large format, falling off on the edges really nicely. The uncoated lenses have different qualities of halation [spreading of light beyond the source] and blooming and flaring. So if there’s something bright, the image just blooms a little, or the top halates a little bit.”
Locations
Baltimore stood in for countless locations in nearby Washington, DC, and the production also shot in Morocco and Mallorca, Spain. The ISIS compound seen in the first episode was shot at a location in Marrakesh, as was the first meeting between Cruz and her target, Aaliyah (Stephanie Nur), Amrohi’s daughter, which was filmed in Q Street, the city’s new upscale, Rodeo Drive-like shopping area, subbing in for Kuwait City. The show’s wedding sequences were filmed at a house on the ocean in Mallorca, where additional sets were also built, including the White House Cabinet Room seen in several episodes. Beach scenes representing The Hamptons were shot at Rehoboth Beach, Delaware, 120 miles from Baltimore.
“The challenge on this show was a lot of it takes place in DC, and we were situated in Baltimore, which is not the easiest place to film,” Cameron told Tom Chang at Bleeding Cool.
“We had to make a lot of Baltimore locations work for DC and get the establishing and aerial shots. It came together, but it’s always a challenge when you’re not in the exact place. I enjoyed going over to Morocco, I had some things directed there, and I had several scenes shot for John on episodes one and two there. It ended up being the better part of seven months. That was a challenge unto itself.”
“A Murder at the End of the World” — This Show Has Everything: True Crime, Tech Paranoia and Truly Gorgeous Visuals
“A Murder at the End of the World:” Emma Corrin as Darby Hart. Cr: Chris Saunders/FX
TL;DR
Although set in a glossy, hi-tech world, the creators of FX mystery series “A Murder at the End of the World” embraced imperfection and avoided trying to fix everything in post.
Cinematographer Charlotte Bruus Christensen, ASC talks to NAB Amplify about working with series creators Brit Marling and Zal Batmanglij.
She shot with custom-made detuned lenses and devised LUTs for each of the three principal locations in New Jersey, Iceland and Utah.
At first glance, a murder mystery set at a remote luxury retreat for some of the world’s most influential people recalls shows like The White Lotus and Glass Onion: A Knives Out Mystery, but new seven-part FX series A Murder at the End of the World has a different spin.
“With its time-jumping structure, uniquely eerie tone and warnings about artificial intelligence and climate change, it is also unmistakably the work of the idiosyncratic creators behind Netflix series The OA, Sound of My Voice and The East,” Esther Zuckerman writes in The New York Times.
Those creators are Brit Marling and Zal Batmanglij — a creative team who’ve been together since their first short film in 2007.
Emma Corrin and Harris Dickinson in “A Murder at the End of the World.” Cr: Christopher Saunders/FX
Their new show — marking the first time Marling, a writer and actor, has stepped behind the camera as director — is an Agatha Christie-inflected whodunit, featuring a Gen-Z amateur detective played by Emma Corrin (Diana Spencer in The Crown).
“I knew that Brit was going to be a natural director; I just didn’t understand how much I would enjoy the experience of watching Brit direct,” Batmanglij tells The Hollywood Reporter. “Certain actors, when they get into the directing chair, just have a sensitivity. I saw Emma [Corrin] and Harris [Dickinson] bloom in certain ways when Brit was working with them, and that inspired me to want to go take acting classes.”
Marling explains, “When you’ve spent a lot of years acting, you’re really aware of what actors need to create their best work.”
Marling also co-stars as the wife of Clive Owen’s tech billionaire, who invites a motley crew of guests including an environmental activist, a roboticist and an astronaut to his Icelandic retreat, where one or more of them wind up dead.
“It was really eerie, actually, to see the number of things that, when we had set out to write it four years ago, were science fiction,” Marling told Zuckerman. “When we talked about any of this stuff with people, we had to explain what is a deepfake, what is an AI assistant, what’s a large language model — how does that work? And then by the time we were editing it, to see everything come to pass.”
Marling tells Vulture, “Directing feels like you’re taking the world-building part to its ultimate conclusion.”
To film their story, Marling and Batmanglij sought out acclaimed cinematographer Charlotte Bruus Christensen, ASC, who shot horror hit A Quiet Place; All the Old Knives, directed by Janus Metz Pedersen; Aaron Sorkin’s directorial debut, Molly’s Game; and Denzel Washington’s film Fences.
“At heart this is a coming-of-age story about a child of the internet who knows more how to live her life in cyberland than in the real world,” Christensen tells NAB Amplify. “The script had this larger than life quality as if the world of the internet can’t be contained or quite grasped.”
She continues, “As a teenager I remember thinking the stars were so beautiful but there was an unfathomable distance between them and me. That is how I think we all felt about the cyberworld in this story. You can’t put it into a cage.”
The Danish DP has enjoyed a long-standing relationship with director Thomas Vinterberg, which began when her own short films caught his attention. This led to Christensen’s first feature film, Submarino, which earned her a Danish Film Academy award for best cinematography. She also shot The Hunt and Far from the Madding Crowd for Vinterberg.
“From what I know, Zal loved The Girl on the Train (shot by Christensen for director Tate Taylor) but it was one of those processes where our agents got in touch. I was in New York having just shot Sharper (dir. Benjamin Caron) so we had our first meeting there and it was like first love. No one who meets Brit can fail to fall in love with her.”
Having shot a number of features back-to-back, Christensen wasn’t particularly looking for a TV project. Instead, it was the co-creators’ passion and the story itself that convinced her to take the job.
“A Murder at the End of the World:” Emma Corrin as Darby Hart and Harris Dickinson as Bill, with Jermaine Fowler as Martin, Ryan J. Haddad as Oliver, Pegah Ferydoni as Ziba, Joan Chen as Lu Mei, Alice Braga as Sian and Javed Khan as Rohan. Cr: Chris Saunders, Eric Liebowitz and Lilja Jons/FX
The central character’s crime-solving cyber skills might recall The Girl with the Dragon Tattoo by Swedish author Stieg Larsson, but Christensen says their chief cinematic reference was the work of the great Polish auteur Krzysztof Kieślowski, in particular the Three Colors trilogy (Blue, White and Red, 1993-94).
“We learned a lot from Kieślowski movies and wanted to emulate that tone, something very raw yet cinematic and truthful,” she says. “It’s the way that he took simple ideas and then photographed that idea.
“In these days when you can move the camera so much, even virtually, you can have it fly through a keyhole, under a bed, through a wall; we wanted something that felt raw and which retained those happy accidents, those glitches or scratches that are evidence of something real. We wanted an analog style.”
She elaborates, “Our question to ourselves was how do we make it feel minimalist? For us, perfection was imperfection. We didn’t want to be afraid of imperfections but to embrace all the things that can go wrong and not try to fix everything in post. You really have to work hard to protect that because the instinct from your colleagues in post-production is to fix things.”
Emma Corrin and Harris Dickinson in “A Murder at the End of the World.” Cr: Christopher Saunders/FX
To photograph the series she selected the ARRI Alexa, equipped with a set of spherical lenses from Panavision that she had previously used on the three-part FX miniseries Black Narcissus, which she also directed.
“The image needed to be messed up a little and these lenses added that less-than-perfect quality,” she said, explaining that Panavision’s VP of optical engineering, Dan Sasaki, “detuned the lenses to achieve a softness and vignetting to break up the digital sharpness and cleanness and push the lenses to capture a less perfect image.”
She devised LUTs for each of the three principal locations in New Jersey, Iceland and Utah. “The color contrast was important to creating an energy between scenes as we move from white ‘desert’ to ‘red desert,’” she says. “Among our first creative conversations was about how to delineate between the real world and the cyberworld.”
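For readers unfamiliar with the term, a LUT (look-up table) is simply a mapping from input color values to output values that the DP and colorist design per scene or location. The Python sketch below shows the mechanics with an invented one-dimensional curve; production LUTs like Christensen’s are richer 3D transforms that remap red, green and blue together, and none of the values here are hers.

```python
# Minimal sketch of how a LUT transforms an image: each input level is
# looked up in a table and replaced by the table's output level.
# The curve below (a gentle midtone lift) is invented for illustration.
import numpy as np

levels = np.linspace(0.0, 1.0, 256)    # index = 8-bit input level
warm_lut = levels ** 0.9               # output level for each input

def apply_lut(channel: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map each 8-bit pixel through the table and back to 8 bits."""
    return np.round(lut[channel] * 255).astype(np.uint8)

# Example: push a random 4x4 "red channel" through the curve.
red = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(apply_lut(red, warm_lut))
```

Because the table is fixed per location, the same look can be applied consistently on set, in dailies and in the final grade, which is what makes per-location LUTs useful for delineating the show’s three worlds.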
She approached the show “like a seven-hour movie, as one story and one journey in terms of lighting,” operating the A camera with occasional second unit work for pick-ups.
While Batmanglij and Marling swapped directorial duties on the episodes, Christensen lensed all seven over the 100-day production period.
“I love prep and being in control of what we’re doing, but here I learned how to prep while shooting,” she explains. “If Zal was directing for three days, then Brit would be prepping her next block of two to three days, and vice versa, but I was busy shooting all the time.
“So when either director came to me with a new idea that they’d thought about I had to be quick to re-evaluate. So, I learned to go and chat with the director who wasn’t directing that day in my lunch break to tap into their thoughts and to prep for the next block while shooting.”
https://youtu.be/Xf7fEXANx2c
The billionaire’s Icelandic retreat recalls the opulence of the Roy family in Succession or the forest mansion in the sci-fi feature Ex Machina. It was built on soundstages in New Jersey and presented the biggest production challenge to the DP. The budget wouldn’t allow for the build of the entire set so they built half, dressed it for half the show, and then flipped it around, dressing the other half of the hotel weeks later.
“It’s a circular hotel but we only had space to build half of it at a time so we’d shoot the one half then, with the other half of the set dressed, we’d shoot the same scenes but in the other mirrored half. We also had to connect those scenes to Iceland. We had snow on the stage to link to snow in Iceland.”
https://youtu.be/RVPx5fcLRdI
Working within an LED volume might have eliminated the need to dress and redress sets at such scale, but it would not have delivered the analog aesthetic they desired.
While the co-creators and directors naturally sing from the same hymn sheet, Christensen says that they differed in the way they executed things.
Making her directorial debut, “Brit is very organized, with thorough prep and previz,” Christensen says. “She needed that security, while Zal allowed for a more spontaneous approach. It’s not quite improvisation, but he wasn’t scared of seeing what happens on the day and reacting to that.”
Although she says that the shoot during winter and under COVID conditions in Iceland was particularly tough, Christensen wouldn’t hesitate to work with the duo again.
“Their passion for the story and the camaraderie they bring to set is something to be valued. It was a full-on experience but I have to underline that Brit and Zal were an amazing team — which, trust me, does not always happen.”
How Juliana Broste Takes Her Video Studio to Go
TL;DR
Filmmaker and host Juliana “Traveling Jules” Broste offers tips for creating an effective and practical studio while traveling.
Even if you’re just logging on to Zoom calls, remember that first impressions matter a lot. A well-lit, crystal clear image helps you stand out. It’s also professional in a way that’s especially important if you work in media.
Broste shares her gear list, including some items she typically leaves at home.
Filmmaker and host Juliana “Traveling Jules” Broste shared tips for creating “Your Studio On-the-Go” at the 2023 NAB Show New York. Watch her full presentation above.
What to Bring
“I have different size kits for different applications,” Broste says. She also notes that what she brings will depend on her mode of travel; she’ll bring more gear if she’s going by car than she will when flying, for example.
When creating her studio, Broste says, “I use, for example …my Canon EOS R. I have a tripod. I set up a teleprompter… in front of the lens. And underneath the lens you’ll see a little output, a 12-inch monitor, which is connected to my camera.” You’ll also need the “cable to attach [your devices], whether it’s USB-C or HDMI.”
Also, make sure “you have a clean HDMI out so that the picture in the Zoom call or the picture that you’re recording is not going to have words and numbers and F stop and ISO and all that weird focus box around your face.”
Some cameras, Broste says, have software “where you plug it in and it just works,” but if that’s not the case, she recommends “Cam Link by Elgato, something like that, like a dongle [that] can easily make your camera now compatible with your computer.”
Whether or not you’re going with a single-camera setup, Broste recommends the ATEM Mini Pro, which she says “is an amazing switcher.” She adds, “It’s like a one stop shop. You plug it in and everything works.” Broste goes so far as to say, “If you’re on the move, that is the indispensable item.”
Remember Your First Impression
That’s a list of what Broste will use even for Zoom calls, because “this type of setup [is] just to give you that edge, and especially if you have extra gear or you’re upgrading, this is a good place to put it to use,” Broste explains.
After all, “imagine that first impression when you get on the Zoom call. And you are just crystal clear. And you are lit well, and exactly the job that you do in video production is represented in that one shot. It’s important, right? We want to represent ourselves; first impressions matter.”
Alternatively, you may use your laptop and a webcam, and Broste says, “I definitely have a stand to make it a little higher so that the camera’s at eye level.”
Also, if you’re using a laptop, “a second screen can be helpful. It makes you feel like you’re at home with that extra monitor.”
One use for that second screen? A teleprompter app. Broste uses Teleprompter Plus; she also has the Prompter People 10-inch and a Lilliput monitor.
TravelingJules in action
Other Best Practices
And don’t forget your sound. “I definitely recommend investing in audio,” Broste says.
After all, “if you sound crappy, you can’t hear anything. What’s the point?”
Keep in mind, “not all camera microphones go directly into the computer.” But, she says, one TRS cable “has three rings so that it has headphone capability, the other one has only two rings,” and knowing the difference can help your setup.
If you’re using a camera, consider that “cold shoe mounts are going to give you a really good flexibility to have both a microphone and a light.”
Also useful is a Wi-Fi remote. Broste considers hers “an indispensable tool that I use with my Canon cameras.” You can also use your phone to connect to your camera, even changing exposure or previewing your shot.
“Make sure you have continuous power. I learned that it’s actually two things. It’s an AC adapter and a coupler. The coupler looks like a battery [and] goes in the camera, but it has a cord and that powers it,” she says.
Broste adds that she relies on “the Anker. It’s quite a large but very portable power bank,” which she likes enough that she says she “stopped bringing any other chargers except this one because it always saves me when I’m in a pickle.”
She recommends some lightweight stands, which can be either aluminum or carbon fiber, noting that some are more travel-friendly than others and some also have more features to recommend them.
What to (Maybe) Skip
“I, personally, try not to travel with lights,” she says.
Instead, Broste will “position my face towards the window” when recording. But that doesn’t mean you can’t bring lighting with you, and it definitely doesn’t mean you should ignore it.
She does, however, “really like this Manfrotto collapsible reversible background, which [is] kind of like a reflector,” Broste says.
“Also, you might need lights, fans,” she says, but “I try not to travel with these again, because there’s just so much extra stuff.”
Travel Best Practices
Broste also shared advice for traveling with your kit. “The on-the-go lifestyle, it starts right here when you’re on the road,” she says.
First, Broste recommends investing in Apple AirTags or similar tracking devices to keep a virtual eye on your bags.
Next, consider your connectivity if you’re traveling outside your home country. She recommends “that you have an international plan for your phone,” or consider getting “local SIM cards because it can get expensive to stay connected.” If you plan ahead, “you get off the plane and you already are connected.”
Also, keep in mind that “you have to be the mule and carry all this stuff. And then you get to benefit from the high quality production.” Whatever you pack, if you’re a solo traveler, you’ll have to carry — but then you also get to use it.
She says, “I rarely travel with this roller cart. But that’s also very helpful” because “you don’t have to have all that weight on your back.” However, Broste emphasizes, “You have to make space for that.”
For her own travel, she says, “Most of the stuff I do is this, it’s [a] backpack. It’s pretty, pretty lean and mean; my backpack might weigh 30 pounds. But I fit it all into one backpack. And also, if you’re feeling like traveling with your gear is weighing you down, think about how that’s gonna feel after one hour, three hours, nine hours later. It’s going to get heavier, right? So it’s always best to only bring what you need.”
Evan Shapiro: “What’s Next” for Media in the User-Centric Era — Part 1
Watch media universe cartographer Evan Shapiro’s keynote address, “What’s Next” at the 2023 NAB Show New York.
TL;DR
Media universe cartographer Evan Shapiro’s keynote address at NAB Show New York centered on the pivotal shift to a new user-centric era of media, unveiling new consumer research and urging industry adaptation.
Demonstrating the volatility of the media ecosystem, a Publishers Clearing House survey of 27,000 people in the US found that only 7% of users planned to stick with their current suite of subscriptions over the next year.
Shapiro discusses the “Hierarchy of Feeds” as a crucial adaptation strategy for media companies to meet the diverse and daily needs of consumers.
Highlighting the unexpected rise of local news, Shapiro underscores its significant role and potential for growth in the media landscape.
Shapiro spotlights the rapid growth of Free Ad-Supported Television (FAST), which is projected to reach a global market value of $20 billion by 2028.
Media universe cartographer Evan Shapiro commanded the Main Stage at NAB Show New York in October with a keynote address that urged industry professionals to embrace the inevitable: a new era where user choice dictates the flow of content and technology giants carve the path forward. With his customary wit, Shapiro unveiled fresh consumer research and a set of strategic imperatives designed to navigate the shifting currents of media consumption.
Evan Shapiro at NAB Show New York
Going beyond analysis, Shapiro’s presentation is a call to action, depicting a future that’s unfolding in real time. From the “Hierarchy of Feeds” to the new “Rules of Gravity” in a media world centered around the consumer, he provides a practical guide for industry adaptation.
Explore key highlights in NAB Amplify’s two-part report, and gain full access to Shapiro’s insights by watching the keynote address in the video at the top of the page.
The User-Centric Era of Media Is Already Here
The Media & Entertainment landscape is undergoing a profound transformation with consumers now at the helm, while tech giants diversify to deliver a “Hierarchy of Feeds” including “must-haves.” Shapiro, in his keynote, delineates this transition with strategic imperatives for navigating these changes as he urges industry professionals to acknowledge and adapt to the present realities of media consumption.
“I think there’s this misperception that we’re coming to what’s next, that what’s next is around the corner… maybe a few years off, and that’s absolutely untrue,” he says at the top of his talk. “What’s next is already here.”
The gravitational pull of what Shapiro calls “big tech Death Stars” is reshaping the media universe. His two most recent media maps, sized by market share and communities, illustrate this point vividly. Companies must now operate in a domain where the rules are written by the likes of Amazon, Apple and Google — entities that command a significant share of global mobile users and advertising spend. At the same time, these big tech companies have ceded enormous power to users, who program personalized media bundles on a daily basis using just their thumbs.
Cr: Evan Shapiro/ESHAP
Addressing changing media consumption habits, Shapiro revealed a Publishers Clearing House survey of 27,000 people in the US, which found that only 7% of users planned to stick with their current suite of subscriptions. “Now, math is not my best topic,” he acknowledges. “But what I understand is that means that [93%] of consumers are rethinking some or all of the subscriptions that they have in their home on a month-to-month basis. Ready to switch out, ready to cancel.”
Cr: Evan Shapiro/ESHAP
Shapiro emphasizes “Hierarchy of Feeds” as critical for adaptation to the user-centric era of media. “This is the set of itches [consumers are] looking to scratch when they wake up every morning and pick up that first piece of glass.” Media companies, he says, must ask themselves, “Do I touch all these needs? If not, there are plenty of companies who do, and if not, consumers are going to be spending time with other forms of media while they’re not paying attention to you.”
The New York Times serves as a prime example of a legacy media company that successfully transitioned its business model from a print-centric approach to becoming a multimedia conglomerate. They achieved this by diversifying into “lifestyle bundles” that cater to a variety of consumer “must-haves,” ranging from games and sports to news, entertainment, food, video and television.
Cr: Evan Shapiro/ESHAP
Samsung has also adapted to cater to users. The company isn’t “just a manufacturer of televisions,” Shapiro notes, but also an operating system and channel business. “Google isn’t just a platform with video and audio, but also the maker of the fastest growing connected television operating system on the face of the earth. Amazon isn’t just Prime and free shipping and Twitch, but also the manufacturer of Fire,” he continues.
“You have to think about being everywhere your consumers are because you need to build your business around them, not make them fit into your business.”
Local News Isn’t Just Surviving, It’s Thriving
In an era where the digital transformation of media is often headlined by global platforms and streaming giants, Shapiro spotlights a surprising, yet pivotal player: local news. His analysis reveals a sector that’s not just surviving but thriving amid the media revolution, commanding a significant portion of screen time and audience attention.
As the number one use case for broadcast, “local is not just a segment, it’s a quarter of all TV time,” he points out. “Local urgent programming information that I can use in my daily life is going to be one of, if not the, most important parts of the video economy in the United States and around the rest of the world for the next 10 years.”
Cr: Evan Shapiro/ESHAP
The rise of local news isn’t just about numbers; it’s about relevance and the ability to meet the immediate needs of the community. Shapiro notes the fragmented but significant ways people access local content, from FAST news channels to station apps, and the urgent need among younger demographics for local information. “[For] two-thirds of consumers under the age of 45… local news is really important,” he says. “We all need our weather, our local school boards; these things matter more and more on a regular basis.”
Shapiro’s call to action for local media is clear: adapt and innovate. “If you work in local television, think about ‘what’s my website strategy? What’s my app strategy? What’s my FAST strategy, what’s my podcast strategy?’” he advises.
The shift in advertising dollars follows the audience, and local news is no exception. “The money is going to go where the money works,” he says, suggesting that local news can capitalize on this trend by understanding and leveraging the new metrics of media investment, such as cost per activation and video completion.
The Future of FAST
Free Ad-Supported Television (FAST) is staking its claim in the media landscape, with a growth trajectory that commands attention. Shapiro underscores its significance, noting, “FAST is the fastest-growing segment of the video economy.” This trend transcends borders, with the UK, Austria, Brazil, and Germany among the countries riding the FAST wave.
Platforms such as Samsung TV Plus, Roku, and Pluto TV have seen their monthly active users skyrocket, yet Shapiro urges industry professionals to view this data within the broader market perspective. He projects that by 2028 the FAST industry could be worth between $14 billion and $20 billion worldwide. But while these are impressive numbers, they still pale in comparison to behemoths like YouTube, which is on track to earn a whopping $34 billion this year.
Cr: Evan Shapiro/ESHAP
The data reveals a volatile subscription landscape, with premium ASVODs gaining and losing subscribers at a comparable pace. Shapiro interprets this as a potential pivot point for revenue strategies. Even Netflix is branching into advertising, he says, signaling an industry-wide shift towards a hybrid revenue model that combines subscriptions with ads.
While FAST is a crucial piece of the puzzle, Shapiro says, it’s not the sole answer to a media company’s business model challenges. “Yes, FAST is great,” he says. “Yes, FAST is important. Yes, you should be looking at FAST as part of the continuum you’re making out there. But if you’re resting all of your future laurels on this one format, and thinking it’s going to save your business in and of itself and replace all the revenue you’re losing from all these other traditional platforms, not so much.”
Evan Shapiro: “What’s Next” for Media in the User-Centric Era — Part 2
Watch media universe cartographer Evan Shapiro’s keynote address, “What’s Next” at the 2023 NAB Show New York.
TL;DR
Evan Shapiro’s keynote address at NAB Show New York continues to dissect the user-centric era of media, focusing on the digital ad revolution and the essential “Rules of Gravity” for the M&E industry to successfully navigate the new landscape.
The streaming boom has led to a saturated market, and Shapiro highlights the challenges of subscription churn and the need for innovative business models to retain viewer engagement and ad revenue.
Shapiro’s analysis of industry data reveals a seismic shift in ad revenue, with digital and CTV ad spending projected to reach nearly $58 billion in 2023, challenging traditional TV’s market dominance.
The advertising paradigm is changing, with a significant portion of ad spend concentrated among a few tech giants and a move towards performance-based digital marketing, emphasizing the importance of ROMI and ROAS in media buying.
Shapiro concludes with his “7 Rules of Gravity,” advocating for integration, a symbiotic relationship between subscriptions and advertising, and the strategic importance of daily engagement, commerce, and diversity in leadership to thrive in the user-centric era.
Backed by new research and fresh market analysis, media universe cartographer Evan Shapiro’s keynote address at NAB Show New York charts a course for navigating the new user-centric era of media, where seismic shifts are rapidly reshaping the industry and the rate of change is constant.
Part 1 of NAB Amplify’s two-part report examines the profound transformation of the Media & Entertainment landscape, from evolving consumption habits to fulfilling consumers’ “Hierarchy of Feeds” as a strategy for thriving in the new era. Part 2 continues the exploration, delving into the digital ad revolution and the pivotal “Rules of Gravity” that can help companies redefine their business strategies.
Explore the key highlights detailed below, and gain full access to Shapiro’s insights by watching the keynote address in the video at the top of the page.
Streaming Ascendant: Growth and Challenges
The streaming sector is experiencing an unprecedented boom, reshaping the M&E landscape with its rapid growth and the challenges that accompany it. As streaming services proliferate, they face the dual challenge of saturating the market while striving to maintain and grow their subscriber bases.
The pandemic, Shapiro notes, served as a catalyst for an unprecedented surge in connected television (CTV) sales and subscriptions, leading to a scenario where “more people have more intelligent televisions than they’ve ever had in more rooms.” This proliferation of smart TVs has not only changed how consumers engage with content but has also raised the stakes for content providers to develop a comprehensive CTV strategy.
Contrary to the belief that younger audiences are averse to paying for content, Shapiro argues that they are discerning but willing to invest in premium experiences. The decision to pay hinges on content relevancy, the presence of exclusive originals, refresh rates, and the breadth of the content library. These factors are pivotal in attracting and retaining the younger demographic, who place a higher value on content quality and exclusivity than on cost.
Cr: Evan Shapiro/ESHAP
Shapiro emphasizes the consumer’s newfound power in the user-centric era, with the ability to fluidly navigate between various content offerings, including ad-supported video on demand (AVOD) and subscription-based video on demand (ASVOD), as well as video game platforms. This shift in consumption patterns demands a cohesive content delivery approach from providers.
One of the most pressing issues for streaming services is subscription churn. Shapiro sheds light on the industry’s churn rate, which has seen a significant increase. He explains that every four months, a quarter of all premium ASVOD subscriptions are canceled, a trend that reflects consumers’ growing propensity for “serial churning” — a cycle of subscribing, binge-watching, and canceling.
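As a back-of-the-envelope check (the arithmetic here is ours, not Shapiro’s), “a quarter of subscriptions canceled every four months” implies a compounding monthly churn rate of roughly 7%, which adds up to losing well over half the subscriber base in a year absent re-subscriptions:

```python
# Convert "25% of subscriptions cancel every four months" into an implied
# monthly churn rate, assuming churn compounds month over month.
# Illustrative arithmetic only, not figures from Shapiro's deck.
retained_after_4_months = 0.75

# Solve (1 - monthly_churn) ** 4 = 0.75 for monthly_churn.
monthly_churn = 1 - retained_after_4_months ** (1 / 4)
print(f"Implied monthly churn: {monthly_churn:.1%}")   # ~6.9%

# Compounded over 12 months (three four-month cycles): 0.75 ** 3
annual_loss = 1 - (1 - monthly_churn) ** 12
print(f"Implied 12-month loss: {annual_loss:.1%}")     # ~57.8%
```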
Cr: Evan Shapiro/ESHAP
“If people are churning out on this massive a basis on a regular month-to-month continuum, keeping the ad dollars in [the] ecosystem is going to be difficult in and of itself. It’s not just a subscription problem; it is also an advertising problem.”
The solution, says Shapiro, is to change how streaming companies charge users for content. “They need to figure out ways that are different than just cancel or not cancel,” he counsels. “It doesn’t have to be a binary choice. What if, I don’t know, Pluto and Paramount+ were the same app? And that when you were done with Paramount, you stop paying but you’re still living inside the Paramount ecosystem, and they can still remarket to you? And instead of having to re-onboard the whole time over again, you just click back on for the paid [content]. What if you only pay when you watch, so you always are subscribed, but you’re only paying based on usage?”
Disregarding the need to change their business models will lead to failure, Shapiro admonishes. “Even Netflix is going to have a hard time over the next five years making their business work if they can’t grow their ad business,” he says. “And if they fall into this trap, their ad business will never work.”
Adapt or Perish: The New Metrics of Media Advertising
The advertising landscape within the media industry is undergoing a pivotal transformation, with digital platforms and Connected TV (CTV) rapidly ascending as the new titans of ad revenue. Shapiro’s analysis of the latest industry data highlights a critical juncture for media entities: evolve swiftly with new advertising trends or face decline.
“If you’re in the ad business, it’s going to be an interesting time,” he says, explaining how the US just exited an 11-month decline in advertising, but ad sales, while rising, still haven’t returned to pre-pandemic levels.
Digital video and CTV platforms commanded an already impressive $41.1 billion in 2021, soaring to nearly $58 billion in 2023. This steep upward trend in digital ad revenue is reshaping the traditional advertising paradigm, Shapiro says. In 2021, traditional TV held a dominant 62% share of the advertising market, but it has since contracted to just 51%. In contrast, the market share for digital video and CTV has ballooned from 38% in 2021 to an impressive 49%, signaling a near equalization with traditional TV’s market presence.
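Those shares and dollar figures can be cross-checked against each other; a quick sketch (our arithmetic, using only the numbers quoted above) shows what they imply for the combined market:

```python
# Cross-check the quoted figures: if digital video + CTV was $41.1B at a
# 38% share in 2021 and ~$58B at a 49% share in 2023, the implied totals:
digital_2021, share_2021 = 41.1, 0.38
digital_2023, share_2023 = 58.0, 0.49

total_2021 = digital_2021 / share_2021          # ~$108B combined market
total_2023 = digital_2023 / share_2023          # ~$118B combined market

traditional_2021 = total_2021 - digital_2021    # ~$67B at a 62% share
traditional_2023 = total_2023 - digital_2023    # ~$60B at a 51% share

print(f"2021: total ~${total_2021:.0f}B, traditional ~${traditional_2021:.0f}B")
print(f"2023: total ~${total_2023:.0f}B, traditional ~${traditional_2023:.0f}B")
# Traditional TV's dollars shrink even as the overall pie grows, which is
# Shapiro's point about where the money is going.
```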
Cr: Evan Shapiro/ESHAP
Ad spend is indeed on the rise, says Shapiro, pointing to a recent Google earnings call that reported a 12.5% increase in revenue for YouTube, “but it is not being distributed proportionally across the ecosystem the way it was pre-lockdown,” he cautions, “and it never will be again. It’s going to the big platforms. And it’s going to the places where the ad buyers know that their dollars work.”
Media buyers, he says, are moving out of more traditional upfront deals “into much more performance-based digital marketing” on CTV and digital platforms. Emphasizing that “the money is going to go where the money works,” he points out that a staggering 60% of all ad spend in the United States is funneled to just three companies.
Cr: Evan Shapiro/ESHAP
One crucial point, Shapiro adds, is that consumers see creator-led social video as a quality equivalent to professionally produced content. “More importantly, for your business, advertisers now see it the same way,” he says, and they easily move ad budgets back and forth between the two ecosystems based on pricing, performance and need.
“As a provider of ad impressions, [you] need to be able to demonstrate that their money isn’t being wasted when they spend it with you,” Shapiro advises, noting that more than half of advertisers, brands and agencies surveyed said return on media investment is the number one metric for determining media buys.
The key to thriving in this new advertising economy, Shapiro says, is understanding and leveraging the metrics that matter. Return on media investment (ROMI) and return on ad spend (ROAS) are becoming the primary metrics for media buying, he asserts. “This number is going to rise [at] every upfront forever, it’s never going to turn back around.”
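For readers outside ad sales, both metrics are simple ratios, though exact definitions vary from shop to shop. A toy illustration under one common formulation (all numbers invented):

```python
# Toy illustration of the two metrics Shapiro says now drive media buys.
# Definitions vary by organization; these are common formulations, and
# every number below is invented for the example.

def roas(attributed_revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue attributed to the ads / their cost."""
    return attributed_revenue / ad_spend

def romi(incremental_profit: float, media_investment: float) -> float:
    """Return on media investment: incremental profit / total media cost."""
    return incremental_profit / media_investment

# A campaign that costs $200k and drives $700k in attributed sales...
print(f"ROAS: {roas(700_000, 200_000):.1f}x")   # 3.5x

# ...but nets $150k in incremental profit after margins and overhead:
print(f"ROMI: {romi(150_000, 200_000):.0%}")    # 75%
```

The gap between the two numbers is why buyers increasingly look past raw reach: a high ROAS can still hide a campaign that loses money once margins are counted.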
Shapiro’s “7 Rules of Gravity for the User-Centric Era”
Shapiro concludes his keynote with the “7 Rules of Gravity” for the user-centric era, guiding principles for media entities navigating the new landscape where consumers dictate the orbit.
Cr: Evan Shapiro/ESHAP
Rule 1: Integration Over Isolation — Shapiro champions a unified media experience, where users control the convergence of video, audio, social, and games. “The user is the center of all of it,” he insists, advocating for a seamless integration of media services.
Rule 2: Subscription and Advertising Symbiosis — The second rule dismantles the notion that subscriptions and advertising are at odds. Shapiro argues for a complementary relationship where both models can coexist and bolster the other, providing a stable revenue mix.
Rule 3: Global Audience, Local Content — Shapiro highlights the importance of content that resonates locally while reaching globally, especially for the under-40 demographic that constitutes a majority worldwide.
Rule 4: Compete and Cooperate with Tech Giants — Media companies must navigate the delicate balance of both working with and competing against the tech behemoths. Shapiro advises learning from diversified companies like Amazon and Google, which offer bundled services for consumers and advertisers alike.
Rule 5: Daily Engagement Is a Must-Have — To be indispensable, Shapiro says, media must engage users daily. “Just because they use you today doesn’t mean you’re a must-have,” he cautions, pushing for consistent and compelling daily engagement.
Rule 6: Commerce Pumps the Heart of Media — Shapiro reminds us that commerce is the lifeblood of media, and integrating commerce into media strategies is not just an option but a necessity. “Commerce pumps the heart of media, it always has,” he states.
Rule 7: Representation at the Helm — Shapiro calls for diversity in media leadership, ensuring content reflects and resonates with a broad audience. “The media has too few people at the top from the communities it’s supposed to serve most,” he points out, stressing that a diverse array of voices in leadership positions is not just a moral imperative but a strategic one.
Shapiro’s parting message is one of urgency and action. He implores media companies to align with these principles swiftly, not only to survive but to lead in the user-centric era. The future favors those who place the user at the core of their strategy, who innovate in content, engagement, and commerce, and who act decisively. The era of user-centric media is not on the horizon — it’s here, and the time to adapt is now.
“The Holdovers:” Alexander Payne and Kevin Tent on the Director-Editor Collaboration (and They Should Know)
Paul Giamatti and Dominic Sessa in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
TL;DR
Director-screenwriter-producer Alexander Payne and editor Kevin Tent, ACE reunite for their eighth feature film, “The Holdovers.”
A period film with Payne’s characteristic tragi-comic elements starring regular actor Paul Giamatti, the comedy-drama is generating awards buzz.
The film marks one of the few occasions where Payne has not worked from his own script, although Tent says this made no difference to his craft approach.
The film’s 1970 setting is evoked with needle drops of classic tracks by The Allman Brothers Band, The Temptations and The Swingle Singers, among others.
Longtime friends and collaborators, director-screenwriter-producer Alexander Payne and editor Kevin Tent, ACE reunite for their eighth feature film, comedy-drama The Holdovers, which has been generating awards buzz.
Set in 1970, The Holdovers tells the tale of Paul Hunham (Paul Giamatti), a curmudgeonly instructor at a New England prep school who remains on campus during Christmas break to babysit a handful of students with nowhere to go. He soon forms an unlikely bond with a brainy but damaged troublemaker, and also with the school’s head cook, a woman who just lost a son in the Vietnam War.
Since their first project together, Citizen Ruth in 1996, the duo has made Election, About Schmidt, Sideways, The Descendants (for which Tent was nominated for an Academy Award), Nebraska and Downsizing. Payne was Oscar-nominated for adapting the screenplays for Election, Sideways and The Descendants (winning twice) and nominated as best director for Sideways, The Descendants and Nebraska.
Director of photography Eigil Bryld, actor Dominic Sessa and director Alexander Payne on the set of their film “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Director Alexander Payne and actor Dan Aid on the set of their film “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Director Alexander Payne and actors Paul Giamatti and Da’Vine Joy Randolph on the set of their film “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
A Character-Driven Period Drama
In keeping with those stories, The Holdovers is character-driven, so don’t expect car chases, gunfights or explosions. “It is about people and the pain they carry in their lives and how opening oneself up to others around you can help relieve that pain and sometimes maybe even help you to grow,” says The Rough Cut host Matt Feury, who talked with both Payne and Tent for the Avid-sponsored podcast.
Payne conceived the basic framework for the movie about a dozen years ago after watching a restoration of the 1935 French comedy Merlusse. About five years ago, he received a TV pilot out of the blue, which prompted him to call the writer, David Hemingson.
“I said, ‘Hey, you’ve written a great pilot. I don’t want to do it, but would you consider writing a story for me?’ That’s how it happened.”
The Holdovers is among the few occasions where Payne has not worked from his own script, although Tent says this made no difference to his craft approach.
“On The Descendants we really toned back the comedy because it felt a little forced, but here the tone kind of came prepackaged into the cutting room. Nothing ever seemed forced.”
Dominic Sessa, Paul Giamatti and Da’Vine Joy Randolph in director Alexander Payne’s “The Holdovers,” a Focus Features release.
Cr: Seacia Pavao
Paul Giamatti and Dominic Sessa in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Dominic Sessa and Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Da’Vine Joy Randolph, Dominic Sessa and Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Dominic Sessa and Da’Vine Joy Randolph in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Editing (Just Enough)
The Holdovers largely focuses on two or three main characters, which means that for an editor there aren’t a lot of places to hide when the director has shot long takes of dialogue and reaction.
“Sometimes it is tricky,” Tent agrees, “because Alexander gets amazing performances, but I think it is because he lets them take their time and find the lines properly.
“We try not to cut too much. It is a challenge to keep things moving, picking up the pace, but keeping the performances solid. We had some challenging scenes because we had a couple of fairly long talking scenes, and we’re trying to condense them as the film was evolving.”
He adds, “We tightened in a lot of the scenes to get to where the boys were leaving sooner. And we’re always doing that internally within scenes, dropping lines, that kind of stuff.
“But I think the screenplay really is so amazing. Just the reveal of Paul, as you dig deeper into Paul, you find out so late in the movie that he basically ran away from home, and then you find out that his dad beat him. Normally, people try to set all those things up right in the beginning, and I really appreciated the way things were slowly revealed here.”
Additionally, Tent tells A.Frame, “We’re pretty careful about not getting anything too sentimental or too sappy.”
How do they achieve that? Tent says, “It’s really a discipline in how we’re cutting the performances, I would say. So, if it doesn’t seem like it’s ringing true, then we would probably cut it out.”
The film’s 1970 setting is evoked with needle drops of classic tracks by The Allman Brothers Band, The Temptations and The Swingle Singers, among others.
“Mindy Elliot, our associate editor and assistant editor, started putting music in and then we work with music editor Richard Ford, who helped us with both score and needle drops,” Tent says.
“With needle drops you can’t get too committed to anything because it costs so much money and it’s just such a back and forth with [licensing]. But in the beginning on this movie, I couldn’t really hear the music in it. Mindy suggested putting in one of the Swingle Singers’ Christmas songs and that became something dramatic that we use a lot, which was great.”
Signature Dissolves
Tent also talked about the use of dissolves, a signature Payne-Tent storytelling technique.
“We use a lot of them in The Holdovers, but we’ve always used them. It’s been part of our film language all the way back to Citizen Ruth. There’s a couple of really interesting ones in The Holdovers that I think people actually thought were mistakes at first, and we’re like, ‘No, we did that on purpose.’”
Tent explains to A.Frame, “[W]e’re doing the same things that we’ve always done. But people think the dissolves now are because we’re trying to make it a ’70s film, but not really. We always used them.”
But unlike Sideways, he notes, “there was not a lot of predesigned dissolves.” Instead, “they were all made up in post. But they’re very effective, and I think they smooth out cuts and stuff like that.”
Dominic Sessa, Da’Vine Joy Randolph and Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Da’Vine Joy Randolph in “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Da’Vine Joy Randolph and director Alexander Payne on the set of their film “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Dominic Sessa in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Seventies (Re)Immersion
With Jami Philbrick at Moviefone, Payne elaborates on the 1970s setting: “I don’t remember exactly the moment, but connecting the dots, I thought it would be neat for the movie, to just give it something special. Nebraska’s in black and white, which just gives it something a little special formally. I just thought, ‘Well, wouldn’t it give this movie something special if we make it look and sound like a movie made in 1970.’
“But what it did, especially as my first period film, was give us the idea that we’re pretending that we’re working in 1970 making a low-budget contemporary film at that point. I think that helped our sense of aesthetic, that the sets and the costumes look as lived in, grimy and old as they would’ve been had we been making just a low budget contemporary movie back then.”
He adds, “I always put a lot of thought into the movies in terms of what car the protagonist drives. It’s always an important thing to think about. It tells you as much about the character as their apartment does. The good ones, I think, were Paul Giamatti’s red Saab in Sideways. Then the best one is Matthew Broderick’s Ford Festiva, a little teeny tiny pathetic Ford Festiva in Election.”
Seventies movies were formative for the 62-year-old filmmaker, as he recounts to Jake Coyle reporting for AP News. Payne screened several classics for crew and cast, including The Graduate, The Last Detail, Paper Moon, Harold and Maude and Klute.
“We weren’t trying to consciously emulate the look and feel of any single one of those films but we all wanted to splash around in the films of our contemporaries, had we been making a movie then.
“My birthday parties, we’d go see Chinatown or One Flew Over the Cuckoo’s Nest. But that’s the period when I was a teenager and a sense of taste was being imprinted on me.
“And what I was told was a commercial American feature film. Now they’re considered art films or whatever, the last golden age. Well, you never know when you’re living in a golden age.”
If FX’s The Bear reminds you of a Martin Scorsese film, you won’t be surprised editor Joanna Naugle is a devotee and used his movies as references for the show.
November 12, 2023
How Creators Are Upgrading Their Workflows With AI
Watch “Go Pro with AI: New Pipelines Change the Game for Content Creators” from the 2023 NAB Show New York.
TL;DR
AI tools are available for language dubbing, photo editing, asset management and script generation, delivering professional quality outputs.
Elena Piech, creative producer at ZeroSpace, believes AI is not going to completely replace what creators do, but that it will enhance their work by making the process less time-consuming.
Piech says YouTubers and influencers are already using AI to do their own video voiceovers, even in multiple languages.
AI tools are available for a host of production tasks including almost instant language dubbing, photo editing and asset management, and even script generation.
“My whole thesis is that AI is not taking your job,” says Elena Piech, creative producer at ZeroSpace, a metaverse lab and virtual production studio based in Brooklyn.
“Instead, it’s looking at how it can enhance your workday, make your workday more efficient, and give you the opportunity to do more of the creative decision making that you’d like.”
“We need to acknowledge that the film and photo landscape is changing, and that we can do things to optimize the way that we work,” she said.
R&D, she says, is at the core of what ZeroSpace does. “We look at a lot of different AI tools and see how we can apply them both to our internal projects, and our client projects.”
One example is using AI, such as the tools in Adobe photo editor Lightroom, to speed up the workflow around asset management.
“Let’s say you shoot a wedding, you have 5,000 images, and you need to narrow those 5,000 into your favorite 500,” she explains. “That process can take a while just to go through and mark up. You can use AI to help with some of that decision fatigue and speed up the process. You can upload your full set of photos and then you can change the parameters you want.”
You could, for instance, instruct the AI to ignore all blurry photos, or be more lenient and ask it to select those with minor blur. If there are duplicate images, say five pictures that look the same, you could request the software to select just the best one.
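Something like that culling pass can be approximated in a few lines. The sketch below, which assumes the OpenCV (cv2) library and a folder of JPEGs, flags blurry frames via Laplacian variance and keeps only the sharpest of near-identical shots; the threshold and hash size are illustrative knobs, not Lightroom’s actual parameters.

```python
# Illustrative culling pass: drop blurry frames, keep the sharpest of
# near-duplicates. Not Lightroom's algorithm; thresholds are hypothetical.
import glob
import cv2

BLUR_THRESHOLD = 100.0   # lower it to be "more lenient" about minor blur
HASH_SIZE = 8            # 8x8 average hash for grouping near-duplicates

def sharpness(gray):
    # Variance of the Laplacian: a cheap, common focus measure.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def average_hash(gray):
    # Shrink the image and threshold against its mean to get a tiny
    # fingerprint; matching fingerprints are treated as duplicate shots.
    small = cv2.resize(gray, (HASH_SIZE, HASH_SIZE))
    return tuple((small > small.mean()).flatten())

best_per_group = {}
for path in glob.glob("wedding_shoot/*.jpg"):
    img = cv2.imread(path)
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    score = sharpness(gray)
    if score < BLUR_THRESHOLD:
        continue  # "ignore all blurry photos"
    key = average_hash(gray)
    # Of five pictures that look the same, keep just the best one.
    if key not in best_per_group or score > best_per_group[key][0]:
        best_per_group[key] = (score, path)

keepers = [p for _, p in best_per_group.values()]
print(f"Kept {len(keepers)} images for review")
```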
Professional creators can also upload images and have them immediately edited and tweaked according to their own personal style and taste.
“Things like editing: changing exposure, your contrast, highlights, your shadows, your whites and blacks. It’s up to you to make and build that preset,” she says. “Now you can use an AI tool to get that preset that’s based on you and your style.”
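The “preset” idea boils down to a reusable bundle of adjustments applied identically to every image. Here is a minimal sketch using the Pillow library; the factor values are invented for illustration, and an AI-built preset would be modeled on a creator’s past edits rather than hard-coded.

```python
# A hand-rolled "preset": the same exposure/contrast/saturation bundle
# applied to every image. Factor values here are hypothetical.
from PIL import Image, ImageEnhance

MY_STYLE = {
    "brightness": 1.08,  # stand-in for a small exposure lift
    "contrast": 1.15,
    "color": 1.05,       # mild saturation boost
}

def apply_preset(in_path, preset, out_path):
    img = Image.open(in_path).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(preset["brightness"])
    img = ImageEnhance.Contrast(img).enhance(preset["contrast"])
    img = ImageEnhance.Color(img).enhance(preset["color"])
    img.save(out_path)

apply_preset("keeper_001.jpg", MY_STYLE, "keeper_001_graded.jpg")
```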
When it comes to automatic dubs, Piech talked about software from Speech Labs.
“Let’s say that I voice a video for my English-speaking audience. I can upload a few sample sentences onto their software and then it can translate that into different languages that sound authentic and sound like my voice,” she says.
“We work a lot with YouTubers and influencers and a lot of them [are] now not even doing their own voiceovers. They just have an algorithm that has their voice and it’s spewing it out.”
Another workflow shortcut is to use AI to generate mood boards rapidly, rather than spend hours searching and selecting from sites like Pinterest.
Creators are also using ChatGPT and other text-based generators to spin up email or sales copy for their videos.
She likes using Adobe and its AI image generator Firefly because of the company’s verified approach to copyright.
“They have a lot of copyright protection baked in,” Piech says. “So, for example, if I were to type in ‘Mickey Mouse in the desert,’ Adobe Firefly is not going to give me Mickey Mouse in the final image because they know that that’s a copyright problem for them. So even if it’s just for ideation for potential clients, you’re saving yourself from running the risk of potentially getting sued in the long term.”
Whether you’re excited by or wary of the potential for AI, you’re trying to parse how it will change work. Adobe’s Scott Belsky takes that on.
November 7, 2023
Are Virtual Anchors Heading to Your Local News Broadcast?
Watch “AI Virtual Humans in Broadcast News” from NAB Show New York.
TL;DR
Marc Scarpa, CEO and founder of Defiance Media, shares his company’s approach to creating a custom virtual news anchor, Raxana, and how they built a bespoke virtual studio for around-the-clock broadcast news.
Rather than focus on “deepfakes” and “fake news,” Scarpa talked up the benefits of leveraging AI, along with human expertise, to accurately produce stories while reducing production times and costs.
Scarpa says live reporting, from sports or news events, and investigative and foreign journalists have nothing to fear from AI (yet).
Virtual humans are emerging as a game-changing phenomenon, especially in the news and the journalism category. In a fireside chat at NAB Show New York, entitled “AI Virtual Humans in Broadcast News,” Marc Scarpa, the CEO and Founder of Defiance Media, shared his company’s approach to creating a custom virtual news anchor, Raxana. He also said AI news anchors were primed and ready for local news network adoption. Watch the full discussion in the video at the top of the page.
“Local news is really in trouble and suffering in terms of just operational costs. I think that you are definitely going to see virtual humans being used on a local news level here in the States and certainly for just straight reporting,” he said. “AI can’t compare with live field reporting just now, but you can have a virtual news anchor toss to a field reporter. So that’s totally possible. These things are just a matter of time.”
Marc Scarpa, CEO & Founder, Defiance Media
In a video clip Scarpa shared, Raxana explained who she is and how Defiance produces its news: “I’m an AI powered virtual twin news anchor for Defiance Daily. They are the company who powers my lifelike presence, which is based upon an actual human. Yes, there is a human version of me out there, go find her!” she urged.
“The way we select news stories we run is pretty traditional. Our human editorial director selects press releases from accredited sources and global breaking news outlets. From there, we use ChatGPT to shape our scripts, which helps our editorial director write our stories, making complex topics easier to understand. We utilize cloud-based editing software, so our team can be anywhere in the world creating captivating graphics and visuals that complement our storytelling,” she continued.
“In our news package, we also utilize our growing library of interviews with innovators and entrepreneurs. And as Getty editorial partners, we are able to source from their massive library, which is available in real time. Sometimes, if I’m feeling fancy, I’ll even use Midjourney’s AI art software to give you a taste of my artistic side.
“Imagine a future where the news is delivered by AI powered virtual twin anchors in over 30 languages in real time, all around the world, redefining the way we consume the news.”
Following the promo, Scarpa detailed how his company wanted to fully embrace AI and video news reporting. The business model is much the same as regular news. One difference is distribution efficiency: the AI model allows Defiance to take the same package and translate it into up to about 45 different languages immediately.
“So in essence, we can have 45 different sponsors, based upon language around the same news package.”
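In outline, that fan-out is a simple loop: one finished package, one localized edition per language, one sponsor slot per market. The sketch below is schematic only; translate() and synthesize_voice() are hypothetical stand-ins for whatever translation and voice-cloning services a newsroom actually uses, not Defiance’s tooling.

```python
# Schematic fan-out of one news package into per-language editions.
# translate() and synthesize_voice() are hypothetical placeholders.
LANGUAGES = ["es", "fr", "de", "pt", "ja"]         # up to ~45 in practice
SPONSORS = {"es": "Sponsor A", "fr": "Sponsor B"}  # one sponsor per market

def translate(script: str, lang: str) -> str:
    ...  # call a machine-translation service here

def synthesize_voice(script: str, lang: str) -> bytes:
    ...  # call a voice-cloning TTS service here (the anchor's cloned voice)

def localize_package(script: str) -> dict:
    editions = {}
    for lang in LANGUAGES:
        local_script = translate(script, lang)
        editions[lang] = {
            "script": local_script,
            "audio": synthesize_voice(local_script, lang),
            "sponsor": SPONSORS.get(lang, "house ad"),
        }
    return editions  # same package, one sellable edition per language
```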
We learn that the real Raxana is a model originally from Kazakhstan who now lives in Miami.
“We took her to a Bitcoin Conference in Miami and it was hilarious, because people would come up to her and be like, ‘We watch your news program every day!’ They just thought it was a woman in a virtual studio as opposed to a virtual human in a virtual studio.”
However, the model for Raxana had her likeness “bought out” by Defiance for digital use. “We were very clear on what the use case was. I can’t take her license, for example, and use it in a feature film if I wanted to, unless I use her in a future film as Raxana, the broadcast journalist,” Scarpa said.
In Hollywood, and the media industry in general, there’s a lot of fear around AI taking jobs. Defiance’s workflow, though, is currently very much a balance between automation and human input.
“AI is your friend because, ultimately, it’s saving a ton of time,” he says. “We’re really utilizing AI across the board. It’s not just the virtual human element. And yes, there is a human who’s our editorial director, and she ultimately curates what stories we decide to run.”
However, Scarpa does imagine a near future where conversational AI with virtual humans is commonplace. In broadcast news, he sees AI’s role as more behind the scenes, performing the edits, the program wrappers, and the scripts.
“It’s all pretty formatted anyway. Why waste hours of a talent’s valuable time when they can spend hours more working on stories and preparing for interviews?” he asks. “It doesn’t mean that camera crews are going away, it just means that you’re freeing up that studio time to be able to do something else that would be more productive and more intelligent, if you will.”
As far as Scarpa is concerned, Defiance Media would not exist were it not for the speed and smarts of AI.
“We’re a global broadcast media outlet reaching 150 million households but we’re ultimately an independent media company. To operate a 24-hour broadcast studio, with actual humans and spending millions of dollars on systems integration and building out that studio, and then trying to find a host that can speak 45 languages — it’s just not possible. So AI really opens up your world in a whole new way,” he explained.
“We’re not doing investigative journalism. I have a huge amount of respect for investigative journalists, especially the ones that right now are on the front lines in the wars that are happening in this world. Those people live and die to get us some version of the truth. And I think that the craft of investigative news journalism is highly underrated and very underappreciated,” he added.
“We’re not doing that. We’re just focusing on innovation.”
What ABC News Means About Its Media (Technology) Trifecta
Watch “Technology Decision-Making in an Evolving News & Media Landscape” from NAB Show New York.
TL;DR
ABC News executive director of media & technology Fabian Westerwelle heads an NAB Show New York discussion on “Technology Decision-Making in an Evolving News & Media Landscape.”
Westerwelle also discusses the launch of FAST channel ABC News Live.
LTN’s CTO Brad Wall highlights the growing adoption of IP-based technologies alongside the evolving content opportunities for news outlets.
For broadcast networks like ABC, the ongoing technology challenge requires technology, consumer product and editorial groups working together, supported by deeply embedded IP and cloud vendors.
“We have to shift our thinking to designing that process from the top down, beginning with asking ‘who are our customers?’” outlined Fabian Westerwelle, Executive Director, Media & Technology, ABC News, at NAB Show New York in a session entitled “Technology Decision-Making in an Evolving News & Media Landscape.” Watch the full discussion in the video at the top of the page.
“You have to ask, how do they like to consume news? And how do we create that experience for them across all those different touchpoints?” the executive posited.
“So that means you have to establish what I like to think of as a trifecta. This is the concept where you have to really put the people that are dealing with the customer product right next to the folks that are doing production and editorial work, and the technology and the operations.
“Because if you don’t have all those three pieces together, working hand in hand, you’re not going to realize the proper experience. You then build your content strategy from there and make the right technology decisions.”
Fabian Westerwelle, Executive Director, Media & Technology, ABC News.
Westerwelle heads up the internal product team for ABC News. “We look at our workflows across the entire media ecosystem that we have to support and set the technology strategy around those pieces from a workflow perspective,” he explained. “We also have a large part of our group that’s running our streaming operations.”
He also talked about the broadcaster’s entrée into FAST channels with its now well-established brand ABC News Live, which launched in 2018.
FAST channels are a prime example of this technology strategy, he said. “A key piece is having that direct interaction and connection with content producers, bringing them along from the beginning of the whole process.
“But if we have that trifecta, then you can bring them along right from the beginning, have them help design it right, so that they are able to shift the way they produce content along the way. And that way, we can usually overcome some of the change management issues.”
Of ABC News Live, he said, “We’re really happy with it. It’s continuing to grow. I think FAST is an area where customers are really excited and interested and it serves a really interesting niche. It’s convenience at its finest, I think, especially in the news space. People are already used to watching the VOD on those devices and then the Smart TVs had all of that capability built in so putting a live channel in front of them there was a natural consumer touchpoint that we were able to take advantage of.”
Westerwelle also talked about the major transition over the past decade which was moving away from satellite for distribution and onto IP.
“Today, even on the distribution side, all of the news content that we distribute is all IP,” he said. “So the real focus for us is more about how to transition to a software-based production environment, where we can truly realize the idea of utilizing content for all these customer experiences. That requires transitioning a lot of legacy systems to more of a software layer.”
Brad Wall, Chief Technology Officer, LTN
Helping ABC News achieve this are key tech partners like LTN, whose CTO, Brad Wall, was also part of the session.
“Now customers are coming to us and asking us for the ability to do things in orchestration and business intelligence at the software layer,” Wall said. “Because if they’re already in our network, the conversation moves onto, ‘Now can you create multiple versions of this potential show or this live sporting event and deliver this specific version to this specific taker, like a Roku?’ We’re hearing the same thing from media companies.”
Westerwelle added that the partnership with LTN had created a media pipeline and set of workflows that are much more connected.
“The challenge, I think, we’re all in right now is figuring out how to make that happen in a way where we’re all talking the same language,” he said. “The cloud providers are helping us step into that. As one example, AWS has taken a lot of strides in the last few years to really meet the needs of the broadcast and the television and content creation community. And LTN does the same thing. The most critical piece for us is that vendors are working with us on that together.”
Evan Shapiro and Justin Evans examine how data analytics and engagement are critical in the maturation of Free Ad-Supported Television (FAST).
November 5, 2023
“Fair Play:” How to Throw Your Audience Off Balance
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Courtesy of Netflix
TL;DR
“I am someone who writes my fears, and I was afraid that my career would cost me my relationship. So I wanted to write a movie about that,” says writer-director Chloe Domont about her hit Netflix thriller, Fair Play.
The film addresses everyday questions around modern masculinity, mining a specific type of male dread that manifests itself in an obsession with being “alpha,” fueled by a thriving podcast and YouTube industry.
Domont hopes that her film raises questions about how the link between female empowerment and male fragility might be dismantled.
Chloe Domont’s thriller Fair Play provoked a Sundance bidding war that Netflix won for $20 million and put the writer and director in the spotlight.
It’s the sort of triumph she is still wary of, in terms of how it impacts her own relationships, and was built on strength through adversity in what she calls the “toxic link” between female empowerment and male fragility.
“What I really want to explore with this film is why is it that a woman being big makes a man feel small?” she told Maggie McGrath in a video conversation. “Like why are those two things so closely linked? And I think it’s a systemic societal problem. I think that it’s the way society raises boys to believe that masculinity is an identity and that they have to fit in the box, that success is a zero-sum game. And it’s not.”
According to Moviemaker, the film mines a specific type of male dread that manifests itself in an obsession with being “alpha,” fueled by a thriving podcast and YouTube industry.
Fair Play tells the story of two young employees at a cutthroat hedge fund, desperate for promotions. They’re secretly engaged, because company policy prohibits interoffice relationships. But things get nasty when Emily (Bridgerton’s Phoebe Dynevor) begins to far outperform Luke (Alden Ehrenreich, Solo: A Star Wars Story).
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Courtesy of Netflix
Domont earned a BFA in Film & Television from New York University’s Tisch School of the Arts and by 2017 was landing steady TV directing jobs on high-profile projects like HBO’s Ballers, CBS’s Clarice and Showtime’s Billions.
The idea for Fair Play was “burning inside of me” she told McGrath as a result of her own experiences. “I had this feeling as my career was starting to take off, where my successes didn’t feel like a win, [but] like a loss because of the kinds of men I was dating. These were men who adored me for my strengths, or my ambition, but at the same time, they still couldn’t help but feel threatened by the very same things that they adored me for because of the way that they were raised.
“It just made me realize how much hold these ingrained power dynamics still have over us. So I wanted to put that on screen and be as ruthless with it as … the nature of the subject matter itself.”
Domont’s favorite scene sums this feeling up and was the first thing that came to mind when she started writing the script. This is the scene when Emily gets a promotion but her first reaction isn’t excitement; it’s fear.
Fair Play. Phoebe Dynevor as Emily in “Fair Play.” Cr. Sergej Radovic / Courtesy of Netflix
“That walk home and the dread and the silence of when she gets in there to tell him about it. The way we shot it, too, is I wanted her to feel very small in the frame. So there’s a couple shots that are over his shoulder and he’s very dominant in the frame. She’s very small in the frame and looking away and afraid to even look at him. And I just felt like that encompassed what I was trying to explore.”
She elaborates on this with McGrath, saying that while there may be some progress in American corporate culture as a result of #metoo, she also feels there’s a slow erosion of women’s careers.
“It might not be blunt force trauma, but it’s a death of 1,000 paper cuts. This kind of bad behavior was ignored, and then normalized. And then the scary part is usually after that, it’s escalated. So that’s why those little tiny breadcrumbs you are constantly leaving are so important. It’s like a snowball that constantly builds.”
Domont hopes that her film raises questions about how the link between female empowerment and male fragility might be dismantled.
“How can we demystify the role that men are raised into thinking they’re supposed to fill? How can women embrace their successes without fearing that it’ll hurt them on some level? And how can we love and trust one another, in a world that’s so dependent on the very power dynamics that get in the way of that love, and trust and respect?”
“Fair Play,” behind the scenes: Phoebe Dynevor as Emily. Cr. Sergej Radovic / Courtesy of Netflix
Much of the tension in the on-screen relationship is attributed to Domont’s work with editor Franklin Peterson, whose credits include episodes of Homecoming, Calls and Gaslit.
“Even talking to Menno [Mans], my DP, we were constantly reminding each other, ‘Pressure cooker, pressure cooker,’” Domont told Peter Tonguette of Cinemontage. “We wanted to build up this ballooning tension — this balloon that just keeps getting bigger and bigger, and you know it’s going to burst at some point, you just don’t know what or when. The idea was building up this tension of this couple who can’t escape each other, really.”
Peterson explained that his first thought was to start off the film like a straight drama: “You sell the characters as if they are a couple you are going to just really root for, and then you pull the rug out slowly from underneath them. [But] Chloe said, ‘No, I don’t want to do that. I want, from the very first shot, to keep you unsettled.’”
He also explained that test screenings were “wildly” helpful. “It’s an R-rated movie for people who want to see an R-rated movie about a toxic relationship, or are willing to go on this ride with us. Once you enter that realm, you’re now asking, ‘How do we make this the best version of that movie for that group of people?’”
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Courtesy of Netflix
One of the most difficult scenes Peterson cut was the final scene between Campbell and Emily. It hinges on a delicate balance between what the characters say and what they mean, a balance that also has to sustain the overall tension keeping the audience unsure of what will happen.
As he explained to Filmmaker, “The coverage isn’t complex but to modulate the performances, guide the pace, and accommodate new lines meant we went through dozens upon dozens of versions. We would test the movie with a version of the scene we thought worked, only to realize that while solving one issue, we created another. It’s an example of how the hardest editing work will never show on the screen.”
One thing that never changed was the story’s ending. Domont knew what she wanted to say, and was never tempted to let her characters off with a pat resolution.
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Sergej Radovic / Courtesy of Netflix
“I don’t write one word until I know what the ending is,” she told Moviemaker. “That ending is where the story and the genre come together, in one final punch.”
“It’s working within the thriller genre, which uses violence as a means to solve conflict,” says Domont. “So that was important.”
To watch Fair Play, you would think it was shot entirely in Manhattan, where the story takes place, taking over the city’s many real hedge-fund offices and overpriced apartments, restaurants and bars.
In fact, the production was based in Belgrade, Serbia — where Fair Play executive producers Rian Johnson and Ram Bergman had recently made much of Glass Onion: A Knives Out Mystery.
Domont told Moviemaker, “Ram was advocating for us building there, because he was like, ‘This is the way to build a set the way you want. This is a way to put the most amount of money on the screen. And the crews are excellent.’ So that was what we did. We built all the sets in Serbia. And then we shot all the exteriors in New York, because the movie does not work if you don’t shoot the exteriors in New York.”
She took full advantage of being able to have sets designed to her specifications.
“I intended for it to be kind of a claustrophobic film, in the sense that the characters are trapped between their home life and the workspace, and they go from one enclosed space to another and they can never escape each other,” she explains. “And because we’re in these same spaces for so long, I wanted to build them. And it was very important for me to build them. I had a very specific idea for how those spaces should be and feel, to feel claustrophobic in different ways.”
“Fair Play,” behind the scenes, L to R: Alden Ehrenreich as Luke, Phoebe Dynevor as Emily, Brandon Bassir as Dax. Cr. Sergej Radovic / Courtesy of Netflix
Creator Adrian Per advises that good principles of filmmaking, like pacing, the rule of thirds and color psychology, all still apply to TikTok videos.
He also says that he loves gear, but it isn’t strictly necessary to make good content. The most important element is a good story. (The second most important element is being able to hear that story, so investing in a microphone is a good idea.)
“A lot of people waste time on their hooks. You have about 60 to 90 seconds, maybe two minutes, if you’re making a TikTok,” Adrian Per told the audience at B&H BILD Expo. (Watch his full talk, below.)
Specifically, he warns, “A lot of people waste time explaining who they are. Or if they’re selling a product. They’re telling you about the product immediately. … that’s not how you sell that. And that’s not how you sell yourself.”
Instead, “you want to get into the premise of your video immediately,” Per advises. For his format, that means telling the camera: “Today, we’re going to learn about sound design.” To save time but add detail, he will add a written description. For example, “This is how you do sound design for free. This is how you do sound design on a budget. This is how you do sound design for under $200.”
“Bam!” Per says. “If you’re interested, you’re gonna stay for a few more seconds.”
Regarding the “intro” and “follow and like” trends, Per says, “I promise you, nobody cares who you are at the beginning of the video. But maybe towards the end, they will.”
STRUCTURE AND PACING
Even in short-form video, “you still want to keep the basic principles of storytelling,” Per explains.
No matter your kind of content, Per says, “There are little tidbits and moments within your story or within your day that you can create tension. That’s what keeps us entertained. That’s what keeps us watching. We want to see you solve something.”
“It’s just really condensed,” Per says. “So it’s still a three-part story. It’s still a beginning, middle and end.”
“Don’t rush. You know, you still want to deliver a story,” he says. “You still want it to breathe.”
After all, “if people like your content, they’ll follow, and they’ll see more parts” if you run out of time to capture all your thoughts in one TikTok or Reel.
Remember, Per says, “You’re delivering a story, so you want people to hear the story.”
“Audio is really important,” he says. “You can watch a shitty quality video with good audio, and you’ll be in tune. But you can’t watch an 8k or a 4k video with shitty audio, there’s wind blowing in the background, you’re not going to watch it. And that’s just a fact.”
But “As long as it’s clear, you can use anything. I personally use these wireless lav microphones,” Per says. He adds, “I use my phone as my recorder a lot.”
Also, Per says, “Picking out your music is really, really important. I pick out my music first. Like I’ll go through my songs on my Spotify playlist and say, ‘Alright, I’m going to do it to this pacing.’”
He suggests, “Fitting your dialogue or your voiceover in the pockets and within the tempo of your music is important.” He explains, “It’s like a psychological secret that you can use as a tool to get people more tuned into your content.”
Also, Per says, “I want to hear what’s going on. If you’re at the beach, and I don’t hear the waves it feels weird, right? If I’m at a park and I don’t hear birds, it feels empty. These are things that weren’t really noticed. But they’re felt.”
To achieve that, he says, “I meticulously go through all of my sound design. You know, I have a whole folder of things that I’ve recorded on my phone, whether it’s waves, whether it’s me driving in my car, the sound of me chopping vegetables, I throw it in there, and I match it with my footage. And that’s just the free way to do it. I know there’s like subscriptions out there for stock sounds but they’re really expensive. I’ve downloaded sounds from YouTube.”
Adrian Per. Image courtesy of the creator
SHOTS AND LENSES
“A lot of these things, they’re felt,” Per explains. “They’re not really noticed or seen. But when a story is done right, you’ll notice them because it feels… different.”
When creating short-form content, Per says, “I still keep in mind the rule of thirds. I still keep in mind the science behind my lens choice.”
Understanding “lenses, focal length, that’ll help a lot,” he says. “If you can afford it, if you have access to multiple focal lengths, they can help.”
“When I’m trying to deliver something that’s personal, or deliver something that I want you to really pay attention to, I’ll use a 35 to 50 millimeter because it blurs out the background,” Per says.
Conversely, “with a wide lens, it distorts things. It’s anxiety-inducing. It can feel scary,” he says.
Also, “a formula I like to keep in mind: I’ll go from wide, extreme close-up to medium to extreme close-up to close-up to wide. I like to give it variation; that’s something that will help your story and will keep your audience’s attention retained,” Per shares. After all, “I don’t want it to feel boring, right?”
Nonetheless, Per says, “I film over 50% of my content on my iPhone. There’s a lot of pickup scenes in my videos that you would never know that I shot on an iPhone. I just put on the cinematic mode and plug it in there. I color graded to match my cinema camera. And nobody will ever know.”
After all, he says, today, “everybody has a good quality camera on them. Whether it’s an Android, green bubbles, or an iPhone. I’m just kidding, Android quality is actually it, their cameras are actually better. I just don’t want to inconvenience my friends in the group chats.”
Ultimately, “You don’t need flashy editing, no tricks. You don’t need to [have] After Effects. Stories are told with just regular cuts in movies. If you know how to do it, that’s great. And if it serves your story, even better.”
COLOR GRADING
Color grading is another nice-to-have for creators, as far as Per is concerned.
“Depending on the emotion, I will color grade to help that story, but it’s also not necessary, if you don’t know how to color grade,” Per says.
“If you’re talking about something somber or sad, desaturating your color or making it cooler will help tell that story,” he explains. On the other hand, “If it’s a hot, bright summer day … or if you’re making something happy, exciting, maybe you want to add some more saturation, maybe you want it to pop.”
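As a toy illustration of that advice, the Pillow sketch below nudges saturation and color temperature toward the intended mood; the factors are arbitrary starting points, not a real grading preset.

```python
# Mood-driven grade: desaturate and cool for somber, saturate for happy.
# All factor values are arbitrary illustrations.
from PIL import Image, ImageEnhance

def grade_for_mood(in_path, out_path, somber=True):
    img = Image.open(in_path).convert("RGB")
    if somber:
        img = ImageEnhance.Color(img).enhance(0.6)  # pull saturation down
        r, g, b = img.split()
        r = r.point(lambda v: int(v * 0.92))              # less warmth
        b = b.point(lambda v: min(255, int(v * 1.08)))    # cooler cast
        img = Image.merge("RGB", (r, g, b))
    else:
        img = ImageEnhance.Color(img).enhance(1.3)        # make it pop
        img = ImageEnhance.Brightness(img).enhance(1.05)  # bright summer day
    img.save(out_path)

grade_for_mood("frame.jpg", "frame_somber.jpg", somber=True)
```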
In total, “Color grading [takes] probably 30 to 45 minutes” for Per, who notes, “I’ve made my presets for that for myself already.”
“On average, it’ll probably take six hours,” Per says, “for one piece of content, dammit. Sounds like a long time.”
“When it comes to my Sunday short films, it takes me about 15 to 30 minutes to write it. I don’t know. Yeah, I mean, it’s 90 seconds, right? So I try not to think too hard. And I’m confident in how I speak and how I deliver. So when it comes to writing, I kind of just talk things out with the music.
“And that takes about 15 to 30 or so. Filming it, I take anywhere from 90 minutes to four hours sometimes, filming a 90-second video, which sounds pretty insane. But I don’t go anywhere past four hours or so; I feel like that’s just overshooting, and if I am taking over four hours it’s because I didn’t plan it as well as I should have. For the most part, it’s under two hours. And with editing, it takes me about an hour, sometimes 30 to 45, just because I look at my script.”
But Per says, “I try to deliver quality and value without sacrificing my day.”
Also, he advises, “the more times you do something over and over, the better you will get. So when you spend a bunch of time on one piece of content, and not put it out, trying to perfect it, I think that time spent on perfecting that one thing by putting hours into it hurts you in the long run.”
Casey Neistat is most famous as a YouTuber, but that wasn’t his goal… his career “wasn’t an option” when he started creating videos.
November 9, 2023
With Documentaries, Deepfakes Can Be Used for… Good?
From the documentary “Another Body,” co-directed by Sophie Compton and Reuben Hamlyn
TL;DR
Deepfake technology is starting to be used in documentaries and for advocacy work.
Several filmmakers and human rights advocates told the International Documentary Association that they believe generative AI can be used responsibly to help shield subjects’ identities and to creatively (and responsibly) tell true stories.
They also say that disclosure and watermarking are both crucial to building audience trust in synthetic media and avoiding the trap of “context collapse,” when content is excerpted from its point of origin.
Most of the time when you hear about deepfakes in the news, the reason is negative or at least controversial.
“Synthetic media is at the center of many of the most pressing conversations about the social and political uses of emerging tech,” International Documentary Association moderator Joshua Glick said in his introduction to a panel discussing the ethics of using deepfakes and generative AI for non-fiction media. Watch the full discussion, below.
GUIDELINES FOR GOOD
“These tools can be very useful, but only following certain guidelines around dignity, transparency, and consent of the individuals,” human rights lawyer and Witness documentarian Raquel Vazquez Llorente explained. “So any process that uses AI for identity protection shall always have careful human curation and oversight, along with a deep understanding of the community and the audience it serves.”
Although it may be counterintuitive at first, Vazquez Llorente says deep fake tools can be deployed in “ways that could humanize the subject,” in addition to lowering the lift for filmmakers who want to protect them.
“There was also something… very powerful about this capacity to reclaim this technology. And we strongly believe that technology itself is not the problem. It’s the application. It’s the cultural conditions around the way that it’s able to be used,” explains filmmaker Sophie Compton.
From the documentary “Another Body,” co-directed by Sophie Compton and Reuben Hamlyn
Also, Violeta Ayala told the audience, “I ask you to not wait until a government legislates these things, because we can’t even trust those governments and how far they’re gonna go. So we need to start thinking and talking, and put out these ideas and these questions and these possible guidelines that maybe then we can push forward, and understand that we’re not coming here to say, no, no, no, no, I don’t like these.”
Additionally, Vazquez Llorente says, they advocate for “disclosing of the editing and the manipulation,” meaning “any AI output, we firmly believe it should be clearly labeled and watermarking considered, for instance, including metadata or invisible fingerprints that allow us to track the provenance of media.”
For example, “voice cloning is one of those that is very difficult for an audience to tell if there’s been any kind of a manipulation. So how are we disclosing that modification to the audience?”
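One concrete way to honor that labeling guidance, sketched below with only the Python standard library, is to write a disclosure manifest alongside any AI-touched file, carrying a content hash so the provenance claim survives excerpting. A production pipeline would more likely adopt an emerging standard such as C2PA content credentials; the manifest fields here are illustrative.

```python
# Minimal disclosure sidecar for an AI-manipulated clip. Field names
# are illustrative, not any standard's schema.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def write_disclosure(media_path, manipulations):
    data = pathlib.Path(media_path).read_bytes()
    manifest = {
        "file": pathlib.Path(media_path).name,
        "sha256": hashlib.sha256(data).hexdigest(),  # ties label to content
        "ai_manipulations": manipulations,
        "disclosed_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = pathlib.Path(str(media_path) + ".disclosure.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

write_disclosure("interview_cut.mp4", ["voice cloning for identity protection"])
```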
They must, Vazquez Llorente says, fight the problem of “context collapse,” which is common across the internet.
“We also have a responsibility to educate the audience… so that they will be more media literate going forward, and will understand its sort of uses when they encounter it,” says Reuben Hamlyn.
Francesca Panetta, creative director for MIT’s Center for Advanced Virtuality, and colleague Halsey Burgund collaborated on a documentary that engineered a deepfake Richard Nixon announcing the failure of the 1969 Apollo 11 mission.
CHALLENGES
“The use of AI to create or edit media should never undermine the credibility, the safety or the content that human rights organizations and journalists and the community groups are capturing,” says Vazquez Llorente.
Unfortunately, she admits, “the advent of synthetic media has made it easier to dismiss real footage.”
Hamlyn agrees: “We’re at a time where synthetic media technologies are dissolving trust in imagery. And as documentary filmmakers, you know, trust in the imagery is the foundation of our medium.”
That’s why disclosure is so crucial, but that can create challenges in and of itself. “You also run up against this issue where the disclosure [is] sort of complex and it can disrupt the sort of emotional process of the film,” he suggests.
Courtesy of Witness
Then there’s the problem of inadvertently further “dehumanizing” subjects by using AI-generated “results that often enhance social, racial and gender biases and also produce visceral errors.” So it’s important, she adds, to keep in mind: “Will we preserve the dignity of the people we are trying to represent and that we are editing with AI? And the final question is, does the resulting footage, as I was mentioning, inadvertently or directly reinforce these biases that already exist because of the data sets that have been fed into the generative AI models?”
Creators also must consider “whether the masking technique could be reversed and reveal the … real identity of an individual or their image,” she advises.
Identity Protection: We’ve all seen documentaries where witnesses share information from the shadows with their voices digitally altered. Generative AI could make both tropes passé very soon, by instead creating a new face for interviewees that shows their expressions while hiding their true identity, or by utilizing voice cloning to retain inflection while shielding their vocal signature.
Creative Advocacy and CTAs: PSAs might not need to feature real people or even actors.
Visualizing Testimonies: When done right, viewers may be able to use these tools to better understand and empathize with the plight of victims.
Repurposing Footage: “You may capture footage for certain purposes today, but then in a few years’ time, maybe revisit or reclaim it for other, different purposes,” Vazquez Llorente suggests, explaining that you may need to anonymize participants for the new context, even if they agreed to be shown in the first place.
We’ve been warned this day would come: Believable synthetic reanimations, also known as deepfakes, have entered the political arena.
November 6, 2023
Evan Shapiro Navigates the “Dark Room:” How Data and Engagement are Shaping the Future of FAST
Watch “The Televisioning of FAST” from NAB Show New York.
TL;DR
Evan Shapiro and Justin Evans discuss the critical role of data analytics and audience engagement in the maturation of Free Ad-Supported Television (FAST).
Samsung TV Plus, Samsung’s own FAST platform, boasts an average viewer engagement of 110 minutes per day, outperforming the 34-minute average of linear networks.
The lack of unified, comprehensive data, termed the “dark room phenomenon” by Evans, is a significant challenge hindering the growth of the FAST industry.
As FAST matures, the focus is shifting toward more sophisticated monetization and audience targeting strategies, including a move from demographic-based to “psychographic”-based selling.
Shapiro and Evans agree that the FAST ecosystem has significant room for growth, especially in terms of content discoverability and global expansion opportunities.
Free Ad-Supported Television (FAST) is rapidly reshaping the media landscape, but how well do we really understand its potential? Media universe cartographer Evan Shapiro and Justin Evans, global head of analytics & insights at Samsung Ads, examine how data analytics and audience engagement are critical factors steering FAST toward maturity in a fireside chat that goes well beyond the hype.
From debunking myths to highlighting untapped opportunities, the conversation provides a masterclass in understanding the complexities of FAST. Watch “The Televisioning of FAST” in the video at the top of the page.
The Evolution of FAST: More Than Just Reruns
In just a few short years, FAST has gone from a fledgling concept to a burgeoning industry. Evans paints a vivid picture of this growth, recalling his early days at Samsung Ads. “My team was four-and-a-half people around a card table,” he says. Four years later, the analytics group alone has expanded to 50 people globally across five continents. “So that’s an indicator of how the business has grown.”
Justin Evans, Global Head of Analytics & Insights, Samsung Ads
More than a platform for merely recycling library titles, FAST is evolving to include fresh content. One of the biggest myths the advertising community has about FAST, says Evans, “is that because we’re used to seeing these single-IP networks, we think of all of the FAST ecosystem as being simply, to use the 80s term, like reruns… and that’s not true.”
Samsung isn’t just a spectator in the FAST arena; it’s a key player with its own platform, Samsung TV Plus. Available as a preloaded app on all Samsung Smart TVs, Samsung TV Plus offers a linear television experience with upwards of 250 channels. The platform covers a wide range of categories, including single-title channels such as the Baywatch channel and the Great British Baking Show channel, as well as entertainment and lifestyle channels like Tastemade. With a reach extending to 24.6 million homes — about 38% of all US TV homes — Samsung TV Plus is a force to be reckoned with, as Shapiro observes, even outpacing giants like Comcast and Charter.
Evan Shapiro at NAB Show New York
Evans explains how Samsung’s strategy mirrors the moves of big tech companies that started with services and then diversified into hardware. “Samsung is effectively doing that the other way around,” he says. “We have, obviously, an enormous manufacturing company business. And about nine years ago, they had the clever idea to start a services business. And what that means is we have a group of folks who make the operating system for the smart TVs. We have a team of folks who work on licensing and bringing content into the ecosystem. And we have people who put ads on that content to monetize the attention.”
He revealed some eye-opening statistics during the fireside chat. On average, viewers engage with Samsung TV Plus for 110 minutes per day, a figure that dwarfs the 34-minute average of linear networks. This high level of engagement isn’t confined to a few top channels; it’s widespread across the platform’s diverse offerings.
The Data Gap: The “Dark Room Phenomenon”
In the ever-evolving landscape of FAST, one of the most pressing challenges is the lack of comprehensive, unified data. Shapiro points out the absence of a “single source of truth” in the industry, stating that data is often scattered and not easily accessible. This lack of clarity has led to what Evans calls the “dark room phenomenon.”
Evans elaborates, “The lack of data plays a really big part about why some of this is more obscure than it needs to be. And in fact, I would go even further and say the lack of data is a blocker to the business. It’s a bottleneck to the growth of the streaming business.” This “dark room” is a space that agencies and advertisers are hesitant to enter without adequate information, hindering the industry’s growth.
But the data gap doesn’t just affect advertisers; it’s a challenge for publishers as well. Evans discusses the publisher’s journey to convert viewers from “samplers” to “returners” and then to “loyalists.”
“We have solutions to identify people in each category and how to move them up the ladder,” he notes, emphasizing the importance of re-engagement strategies. “Re-engaging a ‘sampler’ at the right time with the right ad can effectively change the bounce rate from 62% to 14%,” he adds.
Third-party measurement companies also come into play. “They play an important role as an impartial referee in the industry,” Evans says, acknowledging the role of companies like Nielsen in providing some level of data standardization.
To help clients navigate this murky data landscape, Samsung Ads has developed a product called Audience Advisor. “It helps clients understand the streaming environment better,” Evans explains.
Both Shapiro and Evans agree that FAST offers better discoverability of new content compared to SVOD services. “FAST allows users to crash into new content,” Shapiro says, attributing this to the grid-like structure of FAST platforms.
The idea of creating a consortium of data from various platforms like Samsung, LG, and others is floated as a potential solution for the lack of a unified data source. “It could serve as a single source of truth,” Shapiro suggests.
“We did measurement on how often people come back and watch the app,” said Evans. “And then when they watch, how long do they watch for? And for me, this is a good indicator of engagement.”
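That ladder can be pictured as a simple bucketing of return visits and watch time; the cutoffs in the sketch below are invented for illustration, not Samsung’s actual segmentation.

```python
# Toy sampler/returner/loyalist classifier. Thresholds are hypothetical.
def classify_viewer(sessions_last_30d: int, avg_minutes: float) -> str:
    if sessions_last_30d <= 1:
        return "sampler"            # tried the app once
    if sessions_last_30d < 8 or avg_minutes < 30:
        return "returner"           # comes back, but not yet habitual
    return "loyalist"               # frequent visits plus real watch time

viewers = {"a": (1, 12.0), "b": (5, 45.0), "c": (20, 110.0)}
for vid, (sessions, minutes) in viewers.items():
    print(vid, classify_viewer(sessions, minutes))
```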
The Road Ahead: Monetization and Audience Targeting in FAST
As the FAST ecosystem matures, the focus is shifting toward more sophisticated monetization and audience targeting strategies. Evans is particularly excited about the potential of content-oriented data. “Now that all of this content is digital, and we can read it from a data perspective, that also means it should be discoverable in almost innumerable ways,” he says. This opens up new avenues for monetization, allowing advertisers to tag and label experiences that can be sold.
But it’s not just about selling; it’s about selling smartly. Shapiro points out the shift from demographic-based selling to “psychographic”-based selling, allowing for a more nuanced understanding of the audience. “Solving the consumer’s conundrum about what to watch next is going to be a key factor,” Shapiro notes, emphasizing that platforms like Samsung are well-positioned to solve this problem due to their central role in living rooms.
The ability to monetize content through audience buying is another exciting frontier. “Now that there’s so much more scale in a FAST environment, each one of those networks can be contributing to reaching that 5, 10, 15% of the audience,” Evans explains. This not only increases the value proposition for advertisers but also opens up new monetization opportunities for networks.
One of the most striking revelations from Evans was about re-engaging samplers. “We’ve introduced solutions to re-engage samplers at the right time with the right ad, effectively changing the bounce rate from 62% to 14%,” he says. This is a game-changer in terms of increasing loyalty and, by extension, ad revenue.
Both Evans and Shapiro agree that there’s a lot of room for growth in the business, especially with more channels being curated into the system. Shapiro even hints at global expansion opportunities, mentioning that Germany is one of the fastest-growing FAST markets.
As the FAST ecosystem continues to evolve, the role of data in shaping its future cannot be overstated. But data alone isn’t the endgame; it’s the lens through which the industry can gain a clearer understanding of itself. “Right now, I’m focused on trying to contextualize the FAST experience for that advertising buyer universe,” Evans says. “That’s where I feel like there’s sort of this education gap. And I think the challenge there is: ‘What’s the perception — or just kind of the fuzziness — in the media and ad community around FAST?’”
FAST continues to mature rapidly, and it’s clear that data analytics will not only illuminate the “dark room” but also pave the way for innovative strategies in monetization and audience engagement — the untapped opportunities that lie ahead.
Media universe cartographer Evan Shapiro warns that the industry has entered a new era, which companies will disregard at their own peril.
October 29, 2023
“Saw X” Cinematographer Nick Matthews Seeks Beauty in Brutality
“Saw X presented the dual challenge of needing to uphold the franchise’s established familiarity while also venturing to introduce the story in a new, thrilling way,” says “franchise stalwart” director Kevin Greutert.
He adds, “I also wanted to overturn and have fun with the tropes of the Saw series while taking care not to disappoint those who have long loved these movies.”
DP Nick Matthews brings the fun and a fresh perspective to the Mexico City installment, while honoring what Greutert calls the “sacred iconography” surrounding John Kramer and the Saw world.
“I think, ‘How do I take this character and craft something that walks you into that kind of space and into that sort of a world? How do I create shape and darkness within a space?’ For me, it’s about thinking in terms of deep background, midground and foreground, and then letting things fall off in a lot of places.”
“Saw X.” Photo Credit: Alexandro Bolaños Escamilla, Cr: Lionsgate
John Kramer’s Giallo Inspiration
Saw is “very much rooted in Italy’s giallo filmmaking,” Matthews explains. To achieve this look in Saw X, Matthews opted for a Sony Venice, which he rated at ISO 2,000, paired with Cooke Panchro/i Classic lenses and Pearlescent 1 filters, according to Jenkins.
Additionally, he sought to create the giallo look with practical lighting only. He told AC, “We start with whitish-blue fluorescents and golden tungsten lamps. Then we tumble into this dirty palette of sodium-vapor orange, arsenical greens, red emergency lighting, and ochre yellows, all built into industrial fixtures.”
For Saw X, Matthews explains, “I wanted to hark back to the dirtiness, the grittiness, the grime, the pervasive darkness, these poppy giallo colors of the early films, but I didn’t want to do it in a way that felt very heavily done in the DI; I wanted to do it with lighting. I wanted to create three-dimensional color spaces where you have these primary, secondary and tertiary colors populating the film.”
Matthews told No Film School that shooting in Mexico brought “a certain tonal palette to the movie.” But as DP, he says, “I’m trying to design a world that you can walk into and put a camera in and it’ll photograph well.”
Speaking about the colors, Matthews said, “I took the color palettes we used in Mexico and dramatized them for both the abductions because every character who ends up in these traps is abducted in these very giallo-like sequences. Anything that was going to become a tertiary color in the trap sequences I would use as the primary color first — a deep red, or rusty yellow, a sickening green, or a sodium-vapor color.”
“Saw X.” Photo by Alexandro Bolaños Escamilla, Cr: Lionsgate
Practical Mood Lighting
In addition to setting the mood, lighting was crucial to differentiating between the different challenges, known colloquially as “traps,” that John Kramer poses to those who have wronged him.
“By the time we got into the traps, we were shooting, easily, around 80 shots a day. So, we were lighting fairly 360,” Matthews told Jenkins. Because of space and time constraints, “Everything was LED and dressed into practical housings so they looked like they were part of the set,” he says. “However, I’d typically bring a few small instruments out onto the floor like Arri SkyPanel S60-Cs or Asteras to wrap the light further or, in the case of the brain-surgery trap, to add an ochre yellow to the fill side of the character, which dirties up the image and the light.”
Shawnee Smith as Amanda Young in “Saw X.” Photo Credit: Alexandro Bolaños Escamilla
“Saw X.” Photo Credit: Alexandro Bolaños Escamilla, Cr: Lionsgate
Shawnee Smith as Amanda Young in “Saw X.” Photo Credit: Ivan Meza
Another example, which Matthews shared with Screen Rant’s Grant Hermanns, was “When the emergency lights go off, and we sort of have this big reveal towards the end, just like, I really wanted the whole room to be bathed in red. And so I remember when I asked the production designer for 50 emergency-like spinners, and he’s like, ‘What the fuck, like, 50.’ And I’m like, ‘Yeah, I need like, one in every corner.’”
Because of the film’s complexity, Matthews told No Film School, “This movie had more interdepartmental conversations than any other project I’d worked on.”
He explains: “Once you get into a trap, you’re dealing with special effects, prosthetics, VFX, production, design, lighting, ultimately, and cinematography. And all of those elements have to come together, costumes as well.”
Paulette Hernandez as Valentina in “Saw X.” Photo Credit: Alexandro Bolaños Escamilla
In addition to complexity, timing was of the essence so that they did not have to reset any of the traps. Matthews recalls to Screen Rant, “All the brain trap stutter-frame stuff was shot on my Blackmagic [6k] just with me, like, you know, walking around while they’re shooting other pieces. And I’m doing it because we just have to move like lightning.”
To achieve the stutter-frame look, Matthews employed a “lens whacking” technique, in which “you just disconnect the lens from the sensor plane, and you’re just kind of playing, which is a way to simulate those film rollouts they got in the early days of shooting film, when they would use the rollout in the film itself. And so it’s like playing with light leaks and stuff like that.”
Futurist Peter Csathy says the WGA has been smart to agree to a time limit of three years in its new pact with the studios. This will allow the guild to survey the changing landscape and determine if contracts need to be updated.
Csathy thinks there are “compelling opportunities” for all players in the Media & Entertainment industry to leverage the power of AI, provided you do your homework and “Get a game plan.”
When it comes to AI, for futurist Peter Csathy, you have to get real: “I understand the fear, but we can’t put our heads in the sand. We need to look at things stoically.”
Csathy is considered a leading expert in Media & Entertainment, in particular where M&E meets future tech. In a new presentation, which you can watch below, he shares his thoughts on the current state of play of generative AI in the overall creative economy, highlights “compelling opportunities” for all players in the entertainment industry to leverage its power, and assesses the sobering risks it poses to artists within the entertainment ecosystem.
“I’m certainly no engineer, but I understand [tech] pretty deeply and I’m not afraid of it,” he says. “But with AI and with all new technologies, we need to be stoic about it, understand not only the possibilities, but also the risks and the impacts on life as we know it today and on the industry that we love so much.”
AI may be a mainstream topic in Hollywood, he says, but it is the Big Tech companies that will make the most money and hold the most control and power.
“Let’s look at the realities of economics. Big Tech has multitrillion-dollar valuations. Whereas the biggest media company out there, a traditional media company, which is Disney, has $150 billion market valuation. Ultimately, Big Tech is the big winner here. And I would say that Big Tech is the big winner on the backs of creators, artists, musicians.”
Certainly, creators, artists and musicians can learn to leverage AI for their benefit, but ultimately, “the scale of it all really inures to the benefit of Big Tech.”
That said, not even the CEOs of Microsoft, Google or Amazon know precisely how the sausage is made. “They don’t know precisely how a work is created [by generative AI]. They know generally how it’s created but they don’t know precisely how the ultimate output is achieved, when it comes to the black box of generative AI and the inputs that we put into it. Even the smartest minds developing the technology don’t know exactly how it does what it does, or where it’s going to be going.”
While that spreads inevitable confusion, uncertainty and fear, Csathy cautions that Media & Entertainment companies historically tend to “put their heads in the sand.” Ignoring AI is not a sound business strategy.
Cr: Peter Csathy
He advises CEOs to think about what Pixar did to traditional animation. “Before Pixar, Disney artists would hand-draw each frame. Now, there’s a beautiful art to that. But imagine the length of time it takes to realize the vision of the film, while Pixar came in with computer generated animation and really disrupted and transformed the industry. Now, for some, it was not welcome because it disrupted their job as traditional animators but, on the flip side, it created an entirely new industry with new jobs,” he explains.
“I don’t want to minimize the human pain of that,” he adds. “It’s akin to what happened with factories on automation.”
Csathy suggests that governments are not equipped to create guardrails or regulation on AI, due to a lack of understanding and “demographic imbalances” in Congress. However, the biggest guardrail for Media & Entertainment companies using generative AI is existing copyright law, which in the US denies protection to AI-generated works.
“While it’s daunting, just because it creates entirely new creative works doesn’t necessarily mean that it’s cannibalistic. I certainly believe that humans love engaging with cool content and experiences. There may be some cannibalizing because we have limited time in a day, but nonetheless, if I liked this AI-generated work, I still may like the song I was listening to that is not AI generated. They’re not mutually exclusive.”
Cr: Peter Csathy
He imagines likeness and voice licensing opportunities for actors like Tom Cruise (“So you can imagine a case where Tom Cruise Mission Impossible 20 is in production, and Tom Cruise is on a beach sipping his margaritas,” while the script, the actors and so on are auto-generated), but this doesn’t address the fears of the 99% of talent without Cruise’s star power.
Of course, SAG has yet to agree to terms with the studios, with AI royalties being a sticking point. Csathy says the WGA has been smart to agree to a three-year time limit in its new pact with the studios. This will allow the guild to survey the changing landscape and determine if contracts need to be updated.
“You have to learn to understand the language of AI, all of you no matter what role you play in the ecosystem of creativity, M&E, or technology. So you get it yourself. So you can speak the vernacular. So you have credibility. So you can work with other people and collaborate with them. It’s very important and follow developments closely,” he concludes.
“You got to create your game plan. Like I said, you can’t fear AI. This is the reality. This is where we are. Stoicism is key.”
The disruptive force of AI could be as transformative to democratic institutions as the printing press, argues policy expert Samuel Hammond.
October 29, 2023
How the Team From “The Killer” Sustains Its Style and Structure
“David Fincher told me this is a film about someone’s process, and the camera must be an objective ghost in the room,” says Erik Messerschmidt, ASC, about The Killer, his latest collaboration with the director.
After all, “I think it’s always interesting to watch somebody use their tools with great precision,” Fincher tells The Guardian.
“This is someone who never allows anyone to be close to them, but suddenly you are there – what does that feel like?” the cinematographer elaborated to Emily Murray at GamesRadar.
“I think sometimes we are looking for the bigger picture, the themes, but with this film you can take it as being about all sorts and just go for it – capitalism, nihilism, humanity, etc.
“But, for me, it was all about how you bring the audience to a place they are not used to being: close to this assassin.”
Based on the French graphic novel series of the same name, the film stars Michael Fassbender as the nameless assassin who goes on an international manhunt, even as he continually insists to himself that it isn’t personal.
The Killer “is an eminently re-watchable revenge movie, morbid and sardonic and wickedly funny, the latter of which hasn’t been highlighted nearly enough in early press. Think John Wick, if Keanu Reeves was a sociopath with a penchant for bucket hats, Amazon and inadvertently xenophobic quips about Germans. Oh, and if he loved The Smiths,” sums up GQ’s Jack King.
A key text for Fincher and Messerschmidt was the 1967 crime thriller Le Samouraï from director Jean-Pierre Melville.
Fincher also encouraged Messerschmidt to read the source material for The Killer, a French graphic novel by Alexis Nolent.
“I read it in French and I don’t speak French. But it was interesting, because I learned with graphic novels you don’t necessarily need the dialogue, and our film has so little dialogue too. It made me think a lot about composition and framing. We weren’t making Sin City, so it didn’t have to look like a comic, but we did use similar techniques when looking at where to put the camera, how close, etc.”
The lack of dialogue was something he was particularly drawn to, being excited at the prospect that the camera would really have to do the talking.
“All of my initial conversations with David weren’t about style, but instead pace and scene structure. We have a more nuts-and-bolts approach discussing how we are going to tell the story with the camera, then the style is born from that. We were always talking about point of view — when do we see what Fassbender’s killer sees and when do we watch him instead? How does that affect the interpretation of the scene?”
After working on several commercials and television shows, Messerschmidt ended up on the set of Fincher’s Gone Girl, working as a gaffer.
The duo bonded, with Fincher then recruiting him as DP on the TV series Mindhunter and the biographical drama Mank, for which Messerschmidt won the Academy Award for Best Cinematography.
The Killer was shot in Paris, the Dominican Republic and then Chicago and New Orleans. For Messerschmidt, the location scouting was a process of natural discovery.
“We looked at the locations that Fincher had already seen and started to talk about the structure of the movie and it kind of evolved from that,” he told Deadline.
Editor Kirk Baxter told the same Deadline interview about his process: “It was a tricky movie to edit because it’s a straight line in terms of the story. I think people often talk about editorial when you’ve got timelines that are going back and forth, or six different characters that are weaving in and out but we’re following one person all the way through. And surprisingly, it made it actually harder for me, because it’s sort of a highly polished straight line.
“You’re working on the idea of a perfectionist, and he needs to be shown with precision and so it sort of just translates straight back to trying to heighten his world and to make things appear simple.”
“I think for this movie, because it is so much about precision and process, it has to cut so beautifully. It has to cut like butter,” Messerschmidt tells The Film Stage’s Nick Newman.
Unusually for a cinematographer, he says he doesn’t enjoy lighting. “I’m almost reluctant,” he admitted at the Mallorca International Film Festival. “I find that the thing that interests me most about cinematography is how and where we put the camera and how we use it to tell the story.
“The resulting imagery that we create, for me, has to come from that place. But I don’t think of myself as a photographer. For me, it’s all about communication with the director about how we’re telling the story and how we’re cutting the scene apart into pieces that can then later be assembled.”
“[I]t’s a conversation about balance and color temperature and all the technical stuff, but it was much more a conversation about camera direction and scene structure than it was about lighting,” Messerschmidt explains to The Film Stage.
“The film, as a whole, I think is really a conversation about subjectivity and point of view. We wanted to use the technique of putting the audience inside the Killer’s head and then stepping back and being objective, jumping back-and-forth with that. You see it in the movie with sound; you see it with picture,” Messerschmidt says.
Fincher himself has something of a reputation for being very exacting. He famously asks his actors to perform numerous takes to ensure perfection. Props, such as John Doe’s notebooks in Se7en, are meticulously crafted by hand.
What does Messerschmidt think? “I love it. On a David Fincher film set the decisions are immediate. Like, this shirt or these shoes, this color or that color. I think for David and I now [we have] a shorthand. We walk into a location, we end up standing in the same corner, and we look at each other and know intuitively what we are going to do.”
He adds to GamesRadar, “His reputation is that he’s a very controlling, detailed person and I think that’s terribly unfair. He is the most collaborative person I know and very interested in surrounding himself with people who bring something to the conversation.”
Fincher himself has told different interviewers contrasting things. He told GQ there was no resemblance (except a superficial one), while he also admitted to The Guardian that “There are certain parallels” between himself and the character of The Killer.
Messerschmidt admits that the director can also be “intimidating,” especially on set. “I told myself from the beginning to speak up if you disagree, but you have to pick your moments. I am more comfortable now in speaking my mind, but I also believe it’s a cinematographer’s job to conform to what the director responds to – you are trying to execute their vision.
“Luckily, we have enough shared sensibilities, which makes that easy – we walk onto a location, stand in the same place, look at each other, and nod. You don’t get that too often.”
The Killer deploys a soundtrack of songs from the 1980s British band The Smiths, the music the protagonist listens to on his headphones while getting down to work.
Fincher told GQ: “I love the idea of a guy who has a mixtape to go and kill people. But if we have all of these disparate musical influences, are we missing an opportunity to see into who this guy is? So The Smiths became a kind of stained glass window into who this guy was.”
Baxter tells Deadline, “It was David’s idea from the start for the audience to sort of live in the back of the killer’s skull, and we see what he sees, and we hear what he hears. So when it’s his POV, the music that he’s playing, takes over all the sonic space.”
Sound designer Ren Klyce elaborates, “When you see the film you will hear this voice of Michael Fassbender, his interior monologue and in fact, in the film itself, when he’s on camera, he barely speaks. He’s always speaking in his mind. And so that’s a very interesting set of circumstances because on one level you get to know him, but on another, you don’t really know him because it might be his thoughts that are to be trusted or not to be trusted.”
Fincher has been a staunch user of RED’s camera systems over the years, and this continues with The Killer, which marked his first use of their newest unit. “The RED V-RAPTOR [8K W and XL] addressed some color issues we had experienced in the past,” Messerschmidt reports to ICG, “plus it was small enough to go anywhere and was a good match with the KOMODO, which we also used. We also changed up by going 2.35, as scope seemed more appropriate given our location work and many of the shots featuring the killer and his prey together in the frame.”
A-Camera 1st AC Alex Scott was afforded several weeks of prep at Keslow Camera, readying ten cameras for Paris and shipping twelve cameras for plate views down to the Dominican Republic. “There was only limited second-unit work,” Scott also tells ICG. “For driving shots in the DR and a splinter unit for New Orleans [DP’d by Tucker Korte]. The plate cameras had full-frame Sigma Zooms. They determined the necessary angles, and then we’d measure things out with the corresponding vehicle and a stand-in on stage in prep so the plate camera could match our notes.”
The globetrotting scope of the film tracks with other contract killer stories.
As Messerschmidt notes: “The movie is told in chapters, with the character in a different locale each time, progressing from Paris to the DR, Florida, Chicago and New York City. Perhaps sixty or seventy percent of the interiors were shot on stage in New Orleans.” New Orleans also stood in for Florida, with St. Charles, Illinois filling in for N.Y.C. scenes. Shooting finished in L.A., where additional shooting was done months later.
“The tone and visual aesthetic was established and maintained on the set,” adds Messerschmidt. “We’ve had the same post supervisor and same colorist [Eric Weidt] for some eight years, and have developed a very streamlined color-management workflow on set: a single show LUT, no CDLs, no LiveGrade. We monitored in HDR on-set with Sony 17-inch monitors and had HD dailies – editorial had HDR as well – in DCI-P3 and Dolby PQ Gamma.
“We had some abstract conversations about what these various parts of different countries looked like and felt like,” he elaborates. “David was emphatic that the audience experience each one as a discrete and different environment.
“To me, Paris always feels cool and blue, especially at night and even in summer. That cool shadow, yellow highlight look was a big part of the night work there, and it developed from stills we took while scouting.”
Messerschmidt explains to The Film Stage: “Paris, to me, always kind of feels like it has this split-tone quality to it: at night it has this coolness that’s contrasted against the bright, sodium-vapor streetlights that central Paris is famous for.”
In contrast, for the Dominican Republic, he says, “I started thinking about what humid looks like and what Santo Domingo looks like. It’s… a cornucopia of various colors.”
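The “single show LUT, no CDLs” pipeline Messerschmidt describes is conceptually simple: every monitored frame passes through one fixed 3D lookup table, with no per-shot grades layered on top. As a rough sketch of what that single transform amounts to (the trilinear interpolation is standard, but the function name, array shapes and identity-cube test are illustrative assumptions, not anything from the production’s actual toolchain):

```python
import numpy as np

def apply_show_lut(img, lut):
    """Push an image through a single 3D show LUT via trilinear interpolation.

    img: float array (..., 3), RGB in [0, 1].
    lut: (S, S, S, 3) cube, indexed [r, g, b].
    """
    S = lut.shape[0]
    x = np.clip(img, 0.0, 1.0) * (S - 1)   # continuous position in the cube
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, S - 1)
    t = x - lo                              # fractional position within the cell
    out = np.zeros_like(img, dtype=float)
    for dr in (0, 1):                       # blend the cell's eight corners
        for dg in (0, 1):
            for db in (0, 1):
                ir = (hi if dr else lo)[..., 0]
                ig = (hi if dg else lo)[..., 1]
                ib = (hi if db else lo)[..., 2]
                w = ((t[..., 0] if dr else 1 - t[..., 0]) *
                     (t[..., 1] if dg else 1 - t[..., 1]) *
                     (t[..., 2] if db else 1 - t[..., 2]))
                out += w[..., None] * lut[ir, ig, ib]
    return out

# Sanity check with a 17-point identity cube: output should equal input.
S = 17
grid = np.linspace(0.0, 1.0, S)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
frame = np.random.rand(4, 4, 3)
assert np.allclose(apply_show_lut(frame, identity), frame, atol=1e-6)
```

The appeal of such a locked pipeline is exactly what Messerschmidt implies: with one transform shared by set monitors, dailies and editorial, everyone is judging the same image.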
Cinematographer Charlotte Bruus Christensen approached the FX series “like a seven-hour movie.”
October 28, 2023
As Broadcast and Cinema Workflows Converge, Everyone (Everyone!) Has Something to Learn
Watch “Decoding Broadcast: A Call to Filmmakers and Cinematographers Bridging Worlds” at NAB Show New York 2023.
TL;DR
Elements of broadcast and feature film are fusing as evidenced by the introduction of cine-style depth of field cameras into live sports.
Cinematographers are beginning to work within live broadcast although there are still huge differences in workflow, pacing and language they need to get to grips with.
It is thought that more cinema camera companies will open up their systems to allow for tighter broadcast integration.
The conventional wisdom is that live broadcast and cinema production will never meet, but the tools and the craft skills are beginning to blur. While there remain key cultural, equipment and workflow differences it seems as if there’s greater convergence ahead.
Mike Nichols, CEO at Surella, defines the core difference in terms of logistics and aesthetics.
“When you’re executing broadcast, it’s really about the nuts and bolts: Does this signal get to this truck? It’s not as if the broadcast mentality doesn’t care about image quality, but it’s not really the primary driving force behind the execution. Whereas in film, you’re approaching the job to look beautiful, cinematic, aesthetically pleasing.”
Nichols is a 20-year veteran of the production and production resource industry, including a dozen years at AbelCine, where he helped grow the company in the large format multicamera space.
This year he launched Surella, a production company with strategic partnerships in the live multi-cam and immersive market.
His contention is that elements of broadcast and feature film are fusing. Perhaps there is no greater evidence of this than the introduction of cine-style depth of field cameras into live sports. He also has personal experience of recent projects where cinema-style look-up tables (LUTs) were being applied to multi-cam studio setups.
There remain huge differences, however, not least if you are a cinematographer or a broadcast TD looking to cross over into the other’s world.
“In the broadcast world, you just don’t have time for the nuances of color space. You have to plot out everything well in advance in a way that is very different than storyboarding a film with your director. It’s a completely different pace. And the language that’s used is pretty jarringly different.”
He reports that more and more cinema DPs are being brought in to do live, multi-camera broadcast-style shoots, but they are finding it a culture shock if they’ve never been in that environment.
“The good thing is some of the tech producers, the engineers that work at the high level understand both worlds and do a really good job of holding the hands of those cinema DPs because if not it could be very jarring,” he says.
“[In live broadcast] you’re under such immense pressure to nail it, you don’t have a second take.”
He advises DPs with no experience of broadcast but keen to get involved to listen to the communications / talkback between camera-ops, technical directors, vision mixers, replay ops and the director calling a show.
“If you really want to understand the live environment, just listen to the comms and hear how the communication happens during a live show. It’ll blow your mind. The main focus is collaboration because if you don’t do that, you’re going to be spinning off in different directions and not in sync.”
Nichols also talks up the skills of the camera operator working with cine gear like Sony FX cameras in a live environment. It’s not easy to get the focus right in a split second.
“You’re pulling focus but you’re not on a 30-inch lens where you can kind of snap right to it. You’ve got to find it and it’s changing the way operators are approaching their work. It’s changing the way directors are calling shows because they know they have to give an extra beat when they cue up the next camera in their preview in the multi-cam. They know to give their operator that split second extra to find that focus. Because as beautiful as the cinema lenses are, the shallow depth of field is a lot more challenging for the operators.”
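The trade-off Nichols describes can be made concrete with the standard thin-lens depth-of-field approximations. The sketch below is purely illustrative: the focal length, stops, subject distance and circle of confusion are assumed numbers chosen for comparison, not figures from any broadcast spec.

```python
def dof_limits(focal_mm, f_stop, subject_m, coc_mm=0.025):
    """Near/far limits of acceptable focus (meters) via the thin-lens model."""
    f, s = focal_mm, subject_m * 1000.0            # work in millimeters
    H = f * f / (f_stop * coc_mm) + f              # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = float("inf") if s >= H else s * (H - f) / (H - s)
    return near / 1000.0, far / 1000.0

# 100mm lens on a subject 10m away:
print(dof_limits(100, 8.0, 10))   # stopped down to f/8 -> roughly (8.3, 12.5) m
print(dof_limits(100, 2.0, 10))   # cine prime at f/2   -> roughly (9.5, 10.5) m
```

At the same focal length and distance, opening up from f/8 to f/2 shrinks the zone of acceptable focus from roughly four meters to about one, which is precisely the extra beat Nichols says directors now build in before cueing the next camera.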
All in all, he thinks it’s easier to integrate the cinema workflow into the broadcast environment than to bring a broadcast environment into a cine workflow, where DPs need to bring in more crew: a Steadicam op and focus puller, plus additional camera assistants.
On a recent Surella job, Nichols reports shooting a concert at Radio City Music Hall using the same cameras, workflow and team for the concert film as for the giant-screen IMAG (image magnification) feed.
“It was great because it gave us this freedom to do this thing that really isn’t typically done, where we were producing the IMAG but we were also cutting a concert film. And people really responded well to seeing what looked like a concert film on the big screens as opposed to every shot being just of the talent. It really worked. I wouldn’t be surprised if we start to see more IMAG and concert film merging.”
Camera and video systems developers like Blackmagic Design are already targeting crossover kit for this space. RED Digital Cinema has also recently developed cameras and systems aimed at live production (notably live VR).
“I think there’s going to be more of a call to action for the companies that consider themselves more street level in terms of the market to be more active in this space. Blackmagic’s entire ecosystem is great. RED is sort of seeing the value in creating their own next tier ecosystem of multi-cam. But I think more and more the cinema companies are going to see that they need to open up their systems to allow for tighter broadcast integration.”
Viva “Cassandro”! Cinematographer Matias Penachino Steps Into the Ring
Gael García Bernal in “Cassandro.” Cr: Amazon Content Services
Planning a biopic with a pre-existing, ready-made documentary (“Cassandro, the Exotico!”) to draw ideas and inspiration from seems magnifico. But for “Cassandro,” cinematographer Matias Penachino and his director and co-writer, Academy Award-winner Roger Ross Williams, shied away from using it or, in Penachino’s case, even viewing it.
Saúl Armendáriz (played by Gael García Bernal) is a gay amateur wrestler from El Paso who rises to international stardom after he creates the character Cassandro, the “Liberace of Lucha Libre.” In the process, he upends the macho wrestling world and his own life. The movie is based on a true story.
Gael García Bernal in “Cassandro.” Photo: Alejandro Lopez Pineda. Cr: Amazon Content Services
“When I told my friends that I was going to make this film, they all knew about Cassandro because of this documentary. They were happy for me as the documentary was so good that a movie about the same person, they thought, would be as good. I didn’t want to watch it.
“I just didn’t want so many references. When you’re so involved in making a biopic, you don’t want to be so engaged in the actual character that you don’t make something special,” Penachino says.
Penachino instead sought inspiration and cinematic references from a different visual genre altogether — Mexican street photography.
Photographic Influence
“I wanted to change the idea of looking back at films as a reference to making a film. For this film, I preferred basing the cinematography in still photography. In particular, documentary photography instead of cinema.
“I went to Roger with a lot of ideas for street photography and their unique take on the world. They can tell you a story with just one frame. With Roger being a documentary maker and this being a biopic, it was clear that the film should resemble a portrait.”
As Penachino shifted his influences from movies to the street photography of Mexico, he widened his scope to include photographers from Europe and North America, among them UK photographers Peter Dewhurst and Craig Atkinson.
“I came to Roger with books and other references, and he said yes. We then based the aesthetics of the movie on still photography. Also, the rules for the language of the film were based on still photography, like the zero-degree angle with the camera almost always at the character’s height,” he says.
This spectator-or-voyeur approach wasn’t handheld, as you might expect, because Matias didn’t want a documentary feel.
“With the director known for being a documentary maker, people would expect that style. In this way, we turned it around by minimizing the camera movement. The camera is static almost constantly; it does need to move when the characters move. But if they’re not moving, neither is the camera; it just inter-cuts without moving or just pans for coverage.”
Gael García Bernal and Roger Ross Williams in Cassandro. Photo: Alejandro Lopez Pineda Cr: Amazon Studios
Cassandro was director Roger Ross Williams’ first narrative movie, and Matias appreciated how he embraced the differences.
“I think the success of our partnership really came down to the meticulous work we put into pre-production, the exquisite production design, and a production process that was very mindful of the world we were portraying,” Penachino tells British Cinematographer.
“Going from a small crew of about three, including a camera person and sound, to working with around 70 people is quite an experience. You now have seven people behind the camera and the rest on set. But I wanted to make it comfortable for him by not moving the camera so much and letting things happen. So, in a way, it’s shot like a documentary with tools that would help that but as fiction.”
Gael García Bernal in “Cassandro.” Cr: Amazon Content Services
Roberta Colindrez and Gael García Bernal in “Cassandro.” Photo: Alejandro Lopez Pineda. Cr: Amazon Content Services
Roberta Colindrez in “Cassandro.” Cr: Amazon Content Services
Benito Antonio Martínez Ocasio and Gael García Bernal in “Cassandro.” Cr: Amazon Content Services
Raúl Castillo in “Cassandro.” Cr: Amazon Content Services
That’s where the comparison stops, as the shoot paired an ARRI Mini LF with Panavision H Series lenses at an aspect ratio of 1.44:1. “That’s not far from 4:3, which again reminds us of a portrait by being more like a medium format image. The lenses are super soft with a tremendous round bokeh with excellent resolution; they were amazing, really amazing.”
Boxing movies of late have tried to reinvent the way fight scenes are shot, “Creed III” being an example. “Cassandro” wasn’t going to compete on that level.
“We had a couple of different types of shots, but it was clear from the beginning that this wasn’t a sports movie. The film is about a guy who comes out of the closet and reinvents himself in an ultra-macho country.
“The camerawork was designed to tell the story more than present fighting in any different way. We weren’t looking for that.”
“We strategically moved the camera only when absolutely necessary, emphasising a deliberate stillness that further underscored our choice to portray the character from an outsider’s perspective. When it came to capturing the intensity of the wrestling matches, we collaborated with Alberto Ojeda, the film’s Steadicam and camera operator,” Penachino tells British Cinematographer.
“Together, we planned camera movements and angles, investing time in rehearsals and learning directly from the wrestlers, with a special emphasis on Gael’s amazing performance. The primary objective was to stay intimately close to the action to keep the main character in focus while being faithful to the visual language of lucha libre.”
Gareth Edwards’ film was shot on a relatively small budget of $80 million, yet looks like a blockbuster that cost significantly more.
October 26, 2023
Posted October 26, 2023
Motion Capture Makes a Move Into Mainstream
TL;DR
In a recent webinar, Performit examines the current state of mocap, emphasizing the importance of capturing high-quality RAW mocap data.
Motion capture may have brought the Na’vi in Avatar to life for multi-billion-dollar success, but creating realistic motion is always a challenge and perfect data is a myth. What’s more, no one outside of Marvel or James Cameron has the budget for the most high-end systems or the time to work with them.
As Jon Dalzell, co-founder of the British-based developer Performit Live, explains in a webinar, traditionally there are two ways to capture motion.
You can use generic motion capture libraries, which involves searching for assets and paying a royalty fee for use. You then would need to adjust every animation for cohesion and manually keyframe any missing motion.
Or you can originate a location-based shoot, which entails everything from sourcing talent to hiring technical support, while typically waiting for days for the capture data to be processed.
All techniques, including top-of-the-range studio-based models using advanced camera tracking and markers, generate imperfect or noisy data that needs to be cleaned up.
Foot slide, for example, is a common issue where an animated character’s feet appear to slide or glide across the floor, instead of having a firm, realistic contact. This problem occurs due to inaccurate capture or translation of motion data onto the character model. It can also result from inadequate synchronization between the captured motion data and the animation rig, due to imprecise calibration.
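As a rough illustration of the kind of cleanup pass foot slide calls for, the sketch below pins a foot’s horizontal position whenever it looks planted (low height, low speed). The function name and thresholds are hypothetical; a production solver would detect contacts more robustly and re-solve the leg IK rather than hard-snapping positions.

```python
import numpy as np

def lock_feet(foot_pos, fps=30, height_thresh=0.03, speed_thresh=0.15):
    """Pin a foot in place during detected ground contact.

    foot_pos: (N, 3) array of per-frame foot positions in meters, y-up.
    Returns a corrected copy of the trajectory.
    """
    out = foot_pos.copy()
    vel = np.gradient(foot_pos, axis=0) * fps               # m/s per frame
    speed = np.linalg.norm(vel, axis=1)
    planted = (foot_pos[:, 1] < height_thresh) & (speed < speed_thresh)

    anchor = None
    for i in range(len(out)):
        if planted[i]:
            if anchor is None:
                anchor = out[i].copy()                       # start of contact
            out[i, 0], out[i, 2] = anchor[0], anchor[2]      # freeze X/Z slide
        else:
            anchor = None                                    # foot lifted off
    return out
```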
Facial capture technology in mocap studios involves tracking dots on the actor’s face using infrared light. Dalzell notes that the number of cameras and their proximity to the actor affect the accuracy of the subtle movements captured in 3D space.
Studios must procure specialized cameras to detect markers on the motion capture suits, converting physical movements into digital data. Commercially available camera systems can cost upwards of $250,000 — custom systems used by large studios likely cost even more.
Those suits are also expensive. They are embedded with sensors, crucial for capturing the essence of human motion. A $2,500 suit is considered cheap, with many camera-based options costing more than $15,000 each.
Alongside these, there’s the software to process and convert raw motion data for animators to work with.
“To capture your vision, you need to be able to create authentic motion,” Dalzell says. “Researching, organizing and producing motion capture takes time and money, often leaving you working against the clock.”
He points to the industry’s high attrition rate, with 90% of animators citing high stress and burnout, as evidence of the need for more efficient and effective working processes and methods.
(That’s where Performit Live comes in. “Our platform digitally connects you seamlessly with highly skilled professional performers wearing our smart motion capture system enabling you to direct the performer through remote rehearsal, capturing the exact moves you need and downloaded to you in seconds.”)
“Wearable technology will have a renaissance with advancements in fabrics and electrical integration,” Dalzell says. “You will be able to capture motion data in any location without cables or limitations.”
Wearable technology like Performit’s can store motion data locally and upload it to the cloud when a connection is available, “allowing for unique and nuanced captures of elite performers in their real environments.”
Performit reports that they are developing technology for multi-performer remote production.
Many virtual production setups rely on motion tracking to locate the camera, even when motion capture isn’t being used for animation.
October 30, 2023
Posted October 25, 2023
Mamma Mia, Here I Go Again (But With Volumetric Capture)
From “ABBA Voyage,” Cr: ABBA, ILM
TL;DR
ABBA Voyage has cracked the code for holographic concert experiences, making more than $150 million in ticket sales in the tour’s first 15 months.
The Swedish supergroup pulls off the nightly performances with the aid of ABBAtars, custom holograms created using advanced motion capture technology and the work of hundreds of artists.
The concert uses a pre-recorded performance of ABBA projected onto a transparent screen. A 10-piece band accompanying the virtual avatars performs in real time, albeit remotely.
You’d have to be living under a rock not to know that ABBA is back — and have a heart of stone not to feel nostalgia for the innocence of their pop.
ABBA’s current single-city tour, Voyage, launched in May 2022 at a custom-built, 3,000-seat arena in London on the site of the London 2012 Olympics.
“ABBA Voyage is one of the most expensive productions in music history, with a price tag of £140 million (about $175 million) before the first show opened in May 2022,” writes Bloomberg’s Lucas Shaw. However, he notes that after more than a year of daily performances, “[t]hat investment is starting to look like one of the savvier bets in modern music history.” In its initial 15 month run, Voyage “generated more than $150 million in sales and sold more than 1.5 million tickets.” That translates to about $2 million a week, with an average ticket price of about £85 ($105), Shaw reports.
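Those reported figures hang together on a quick back-of-envelope check, using only the numbers quoted above:

```python
tickets = 1_500_000
avg_ticket_usd = 105             # about £85, per Bloomberg
weeks = 15 * 52 / 12             # 15 months of daily shows, roughly 65 weeks

gross = tickets * avg_ticket_usd
print(f"${gross / 1e6:.0f}M total")       # ~$158M, i.e. "more than $150 million"
print(f"${gross / weeks / 1e6:.1f}M/wk")  # ~$2.4M, in the ballpark of "$2 million a week"
```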
How, exactly, is the septuagenarian Swedish supergroup pulling this off? ABBAtars.
“As major legacy acts sell off their catalogs and look to retirement, holographic shows could provide a business model to ensure their music and performances live on forever,” observes David Vendrell for TheFutureParty.
Making the ABBAtars
In 2021, the band gave fans more details about how their virtual alter-egos were created.
Cr: ILM/ABBA
The “ABBAtars” were created by digital artists and technicians from Industrial Light & Magic (ILM). Four of ILM’s five global studios are dedicated to the project, with anywhere from 500 to 1,000 artists working on it.
The foursome was filmed using motion capture as they performed a 22-song set over the course of five weeks. ILM then “de-aged” Benny, Björn, Agnetha, and Anni-Frid, taking them back to 1979.
Cr: ILM/ABBA
“They got on a stage in front of 160 cameras and almost as many genius [digital] artists, and performed every song in this show to perfection, capturing every mannerism, every emotion, the soul of their beings — so that becomes the great magic of this endeavor. It is not four people pretending to be ABBA: It is actually them,” producer Ludvig Andersson explained in a video posted on YouTube.
In a video posted by The Guardian, Ben Morris, ILM Creative Director, said, “We create ABBA in their prime. We are creating them as digital characters and will be using performance capture techniques to animate them, perform them, and make them look perfectly real.”
Cr: ILM/ABBA
Well, no, this isn’t the real thing, but there’s good reason for the digital reworking.
The global appetite among ABBA fans old and new to see them perform live would be overwhelming — a tour the septuagenarian multi-millionaires would see as too exhausting.
This way they can be Björn again and again and again.
Benny Andersson came up with the term ABBAtars a few years ago, with the original concept of creating holograms.
The actual digital concert experience, referred to as “Voyage,” uses a pre-recorded performance of ABBA in their mocap suits which, using an updated version of a Victorian theatre trick called “Pepper’s ghost,” projects them onto a transparent screen. A 10-piece band accompanying the virtual avatars, however, performs in real time, albeit remotely.
In a statement released by the band, they explain that “the main inspiration to record again comes from our involvement in creating the strangest and most spectacular concert you could ever dream of. We’re going to be able to sit back in an audience and watch our digital selves perform our songs.”
Cr: ILM/ABBA
“We simply call it ‘Voyage’ and we’re truly sailing in uncharted waters. With the help of our younger selves, we travel into the future. It’s not easy to explain but then it hasn’t been done before.”
Over at RedShark News, Andy Stout delves into the technique used to create the ABBAtars:
“At its most basic form this technique uses an angled glass screen to capture the reflection of a brightly lit image off-stage somewhere,” Stout explains. “Pioneered by John Henry Pepper following work by Henry Dircks in the 1860s, it would allow Pepper to stage elaborate shows where actors interacted with a ‘ghost,’ simply another actor performing off-stage and lit in such a way as to make their reflection appear and disappear.”
The effect is widely used in theme parks, with Disneyland’s Haunted Mansion being the most famous example. The effect has also been used for a slew of virtual tours featuring Roy Orbison, Buddy Holly, Amy Winehouse, Ronnie James Dio and Frank Zappa, but ABBA’s use of the technique is more complex than these, Stout explains:
“The ABBA staging looks to be a bit more ambitious than that (you don’t employ 1,000 of ILM’s finest unless you’re serious about bridging the Uncanny Valley) which is why, instead of touring, it is taking place in a purpose-built 3,000 seat arena in London’s Olympic Park. That lets the team thoroughly control the projections, the lighting, the effects, and match all that with the physical performance of the musicians and dancers that will be on stage with Agnetha, Björn, Benny, and Anni-Frid.”
Futurist Bernard Marr looks ahead to the implications presented by the new technology. “While we know AI will be used to recreate a young-looking ABBA, we can speculate that the next step could potentially go even further by recreating something of their personalities and behavior,” he writes. “It isn’t a huge leap to imagine they could use language processing and voice recognition to respond to song requests from the audience and, perhaps one day, even hold a conversation.”
ABBA’s virtual concert series will allow thousands of fans to enjoy the experience together, says Marr, who also notes that, at their age, the actual band members might find it tiring to perform eight shows a week.
“In turn, this makes it more accessible for the fans, who might find it more difficult to attend a live arena concert that only has one date,” Marr explains. “Likewise, with Ariana Grande’s Fortnite performance, fans could, in theory, access the spectacle from anywhere in the world. In addition, in the avatar-driven environment, fans could enjoy it alongside digital representations of their friends, heightening the sense of the experience.”
Many virtual production setups rely on motion tracking to locate the camera, even when motion capture isn’t being used for animation.
October 25, 2023
Posted October 24, 2023
Pretty/Scary: Cinematographer Aaron Morton on “No One Will Save You”
TL;DR
Hulu’s sci-fi horror hit “No One Will Save You” is virtually dialogue-free, but the filmmakers aimed not to draw attention to that device while still being aware of how much extra weight the visuals needed to carry.
The most challenging aspect of the shoot for cinematographer Aaron Morton was lighting the aliens with moving lights more commonly found at rock concerts.
Lighting the color red in particular played into the production’s choice of the Sony VENICE camera.
Home invasion movie No One Will Save You can be added to the low-budget horror film renaissance (think Huesera: The Bone Woman, Barbarian, M3GAN, Talk To Me) and was the most-streamed film across all platforms in the US when it was released last month.
Made for $22.8 million, the Hulu original is an almost wordless thriller in which Brynn (played by Booksmart’s Kaitlyn Dever), a young woman living alone as a seamstress in the countryside, fights back against alien invaders.
“He didn’t tell me about the lack of dialogue before sending me the script,” says cinematographer Aaron Morton (Black Mirror: Bandersnatch; The Lord of the Rings: The Rings of Power) of receiving the project from writer-director Brian Duffield. “Reading it for the first time it dawns on you.”
He adds, “It sounds counter-intuitive given the lack of dialogue, but the way the script conveyed the tension and terror that the characters feel did so much work to help us understand the approach to this film.
“One phrase we had throughout prep was that horror can be beautiful. We tried to make a beautiful film that was scary.”
“No One Will Save You” Director Brian Duffield
Duffield, who wrote 2020 monster adventure Love and Monsters and 2020 sci-fi horror Underwater, is drawn to ideas that smash two things together that don’t necessarily go together.
“We talked a lot about if Todd Haynes was making Far From Heaven and if aliens invaded in the middle of it what that would feel like,” Duffield commented on the set of No One Will Save You in the video featurette above.
Kaitlyn Dever as Brynn Adams in “No One Will Save You,” directed by Brian Duffield. Cr: 20th Century Studios
Kaitlyn Dever as Brynn Adams in “No One Will Save You,” directed by Brian Duffield. Cr: 20th Century Studios
Kaitlyn Dever as Brynn Adams in “No One Will Save You,” directed by Brian Duffield. Cr: 20th Century Studios
It’s the kind of thing he brings to a lot of his scripts, says Morton, a New Zealander who previously lensed a film for the director about teenagers spontaneously exploding.
“Spontaneous (2020) was really a lovely love story between two kids who happened to be in a situation where their friends are literally exploding next to them. Brian loves smashing two disparate situations together and seeing what comes of it.”
He continues, “What is clever about No One Will Save You is that while we are learning about what’s happening to the world in terms of the alien invasion, we’re also being drip-fed information about Brynn’s character and what has happened to her in her life.”
“No One Will Save You” Behind the Scenes
While the script has a somewhat conventional narrative driver from set piece to set piece, what’s missing, compared to a more traditional film, is coverage — the over-the-shoulder, reverse-shot grammar of two people having a conversation.
“We’re still using filmmaking conventions but making sure we have an awareness all the time that we were ticking the boxes for the audience in terms of what they were learning about story and character,” Morton explains.
“We knew we didn’t want to treat the lack of dialogue as a gimmick. It was just a by-product of the situation that Brynn found herself in. Our aim was not to draw attention to that device in the movie while being very aware of how much extra weight the images needed to carry in a ‘show, don’t tell’ sort of way. The nature of the film relies on the pictures doing a little bit of extra work.”
“No One Will Save You” Clip | Telephone
The most challenging aspect of the shoot for Morton was lighting the aliens. “The light is part of how they control humans,” he says. “We definitely wanted to be reminded of Close Encounters which did inspire a lot of what we were doing in our movie.”
This includes leaning into the classic tractor beam trope of being sucked into the mothership. They were lighting some reasonably large areas of swamp and night exteriors of Brynn’s house, often using cranes with moving lights. These were powerful Proteus Maximus fixtures from Elation Lighting, more commonly found at rock concerts.
“The moving alien lighting is built into the exterior night lighting. For instance, when Brynn is walking up the road in a forest, the forest is lit and suddenly a beam is on her and she gets sucked up into spaceship. We had the camera on a crane, and we had ambient ‘forest’ lighting on a crane so I could move that ambient lighting with her as she walked. We had another crane with the big alien light that stops her in her tracks. So it was this ballet of things happening outside the frame.”
He continues, “I love using moving lights (combined with the right sensor) because you can be so accurate in changing the color temp by a few degrees, even putting in Gobos to change the beam size, using all the things moving lights are great for in live events and putting them into the cinema world.”
“No One Will Save You” Clip | Power Surge
The lighting design also gave the filmmakers another way to show the audience things that Brynn does not see. “She could leave a room and just as she turns away we play a light across a window just to remind people that the aliens are right there though she’s not aware of it.”
Since the color red plays a particularly important role in depicting the alien presence, Morton tested various cameras before selecting the Sony VENICE.
“You can quickly over-saturate red, green and blue colors with certain cameras, so it’s a big piece of the puzzle that you can figure out early. We felt the color science of the VENICE was incredible in terms of capturing that red.”
“No One Will Save You” Clip | Nails
They shot anamorphic using a set of Caldwell Chameleon Primes. “What I also like about the VENICE is that you can chop and change the sensor size. The Chameleons cover the large format size very well but not perfectly so it meant I could tailor which part of the sensor we were using depending on which lens we were on.”
Elaborating on this he says, “You can very easily go from a Super 35 size 4K sensor on the VENICE to 6K using the full horizontal width of the sensor so sometimes, if I needed a bit more width in the room in a certain situation and what we were framing was forgiving enough, I could use a wide lens in 6K mode and not worry about the distortion we were getting because we’re using every inch of the image circle.”
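The flexibility Morton describes comes down to simple geometry: a lens only has to project an image circle as large as the diagonal of the sensor area actually in use, so a Super 35 crop is far less demanding than the full sensor width. A hedged sketch, with widths that are illustrative stand-ins rather than Sony’s published mode dimensions:

```python
import math

def image_circle_needed(active_width_mm, aspect):
    """Diagonal (mm) of the active sensor area a lens must cover."""
    height = active_width_mm / aspect
    return math.hypot(active_width_mm, height)

# Illustrative 17:9 framings: a ~36mm full-width 6K mode vs. a ~24.9mm S35 4K crop.
print(round(image_circle_needed(36.0, 17 / 9), 1))   # ~40.7 mm of coverage needed
print(round(image_circle_needed(24.9, 17 / 9), 1))   # ~28.2 mm of coverage needed
```

A lens that covers large format “very well but not perfectly” clears the second figure comfortably and only marginally clears the first, which is the trade Morton is exploiting mode by mode.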
The production shot in New Orleans but is not location specific. “It’s Generica,” says Morton, who commends the Louisiana crew.
“We ran three camera bodies all the time with two full teams so we could be shooting with two and prepping another camera then leapfrogging one to the next.”
“No One Will Save You” Clip | Brynn in the Basement
Morton has somewhat of a horror film pedigree, having photographed 2013’s Evil Dead remake, the “Metalhead” episode of Black Mirror (arguably the series’ bleakest story) and The First Omen, directed by Arkasha Stevenson for 20th Century Studios and starring Bill Nighy, which is scheduled for imminent release.
He was in the middle of shooting a feature for Universal in Dublin when the strikes hit and still has three weeks of work on that project to complete, for which he is using a package of Alexa 35 and Cooke S4 sphericals.
“I love going job to job, approaching each with fresh eyes and being given a clean slate. Whilst I welcome giving my opinion on what approach would work, I’d much rather have a two-way back and forth with a director about what they think so we can get the best story to screen.”
“No One Will Save You” Director Brian Duffield
Reaction
With a big full-orchestra score from Joseph Trapanese, the film finds “Duffield admirably trying to turn his Hulu budget into Amblin production value,” wrote Benjamin Lee at The Guardian. “It’s got the feel of a real movie, the highest compliment one can give right now to a film designed for streaming.”
Forbes critic Mark Hughes said, “The film is expertly paced and gets endless mileage from its premise. It’s one part Signs, one part Body Snatchers, and a whole lot of Cronenberg, all of which I love independently and am thrilled to see combined here.”
He adds, “the largely dialogue-free approach might sound gimmicky or distracting, but it’s neither and highlights how well written, directed, and performed the whole thing is.”
Others were more negative. Here’s Sam Adams at Slate: “[It] could have been a spectacularly scary short film. Instead, it’s a movie that starts off with an incredibly strong premise and sense of itself, and then squanders nearly all of it in a scattershot middle and confounding conclusion.”
“It’s the idea of keeping the audience off balance,” says Khan. “Oh, is this a comedic scene or is this person actually going to get killed?”
October 28, 2023
Posted October 24, 2023
Nahnatchka Khan’s “Totally Killer” Has Everything: Horror, Comedy, Time Travel, and True Crime
Kiernan Shipka in “Totally Killer,” courtesy of Amazon Studios
TL;DR
Nahnatchka Khan’s “Totally Killer” combines comedy, horror, time travel and true crime tropes to create a fantastic mashup movie, now streaming on Amazon Prime.
It manages to combine genres without muddying the mood of scenes by relying on a strong cast and distinct tonal shifts.
The film references classic horror movies like “Halloween” and “Friday the 13th,” while leaning into Khan’s comedic prowess.
“Totally Killer” relies heavily on practical set pieces and choreography to create convincing scares and to ground the emotion of key scenes.
Nahnatchka Khan’s “Totally Killer” is much more than your standard slasher flick.
“To the people who work in comedy, it feels like a tightrope walk whenever you’re doing it, you know, because you feel the life and death of it all, but nobody else does,” Khan tells KCRW’s Elvis Mitchell. “And so to actually be able to manifest that into a true movie, and story was really satisfying.”
And don’t forget time travel! Khan’s latest feature is actually “a classic slasher movie fused with Back to the Future,” writes Andrew Webster for The Verge. (He also compares the movie to “Yellowjackets” because of the shifting timeline and two sets of actors.)
But wait, there’s more! “On top of sending up beloved genres like slashers, ’80s teen comedies, and time-travel flicks, ‘Totally Killer’ also tackles a modern-day obsession: true crime,” observes Mashable’s Belen Edwards.
How exactly does this work?
“I like the idea of mashups,” Khan told Collider in a video interview. (Watch the full conversation, below.)
Ehrhart summarizes: “Amazon‘s glossiest, wildest new horror-comedy finds Jamie (Kiernan Shipka) reeling after her mother (Julie Bowen) is murdered by a masked menace known as the “Sweet Sixteen Killer.” Thirty-five years earlier, the same murderer went on a killing spree in her town, leaving her mom the sole survivor of a group of teen friends.
“Luckily, her best friend Amelia (Kelcey Mawema) just finished building a time machine for the science fair, which allows Jamie to travel back to when the first murders occurred and team up with her teenage mom (Olivia Holt) in an effort to save everyone.”
Movie References and Comparisons
Even as a genre-bender, “Totally Killer” is grounded by references and allusions to classic horror movies.
Although known for her comedy work, Khan is a horror fan, telling Rolling Stone she loves “The Conjuring” universe, as well as the original “Halloween” and “Friday the 13th.”
Editor Jeremy Cohen tells 1428 Elm, “I rewatched ‘Scream’ and I watched ‘Halloween’ again and again” in preparation. The end result includes some “pretty explicit references to ‘Halloween,’” he says, adding, “Even the opening shot, with everyone trick or treating and crossing the street. There’s also a pull-out from the house, which is somewhat of an homage to Halloween. There’s even a part where the killer stabs someone and does a Michael Myers head nod.”
In addition to the visuals, Cohen says he “used a lot of temp score from ‘Halloween’ when putting this together and John Carpenter throughout. We also reference some of the music from ‘X,’ which has that sing-songy soundtrack by Chelsea Wolfe.”
Another nod to the horror genre is the Sweet 16 Killer’s mask.
The Sweet Sixteen Killers in “Totally Killer,” courtesy of Amazon Studios
“I think that mask is really crucial for a good slasher movie, especially for this one because I wanted it to feel original, but it had to feel of the time, like something that could exist back then.
“But also, I didn’t want the whole movie to feel retro and nostalgic because Jamie is from this present, she’s from the future, so you have this sort of Gen Z energy going into this John Hughes world,” Khan tells Taylor.
To craft the mask “of a handsome man being scary,” they worked with Tony Gardner and Alterian Inc., Khan says. To get the right vibe — Khan wanted “just the right amount of camp” — they leaned into Kiefer Sutherland’s look in “Lost Boys,” as well as 1980s heartthrobs and Dolph Lundgren.
THR’s Brian Davids says the end result is a “Gary Busey meets Zack Morris concoction,” which Khan agrees is an “amazing description.”
Distinct Genres and Timelines
Cohen explains that despite the fact that the movie is a mashup, Khan wanted scenes to fall distinctly into certain categories.
He says, “The comedy parts are the comedy parts, and the horror parts are the horror parts. There aren’t a ton of laughs when the killer is chasing someone with a knife. We worked on the pace of it, so when you’re watching a scene, you don’t know if a joke will pop out next or the killer.
“It involved figuring out ways to play with the cutting patterns and tension so you don’t know what’s going to come next. When things arrive, they come at unexpected moments, whether it’s a laugh or a death.”
Kiernan Shipka in “Totally Killer,” courtesy of Amazon Studios
Khan told The Wrap, “To get in this space and do a mashup and then check your swing a little bit on the horror part, I feel like that would’ve been a disservice. That’s the fun of doing something like this. It’s the idea of keeping the audience off balance with the comedy so you don’t know like, Oh, is this a comedic scene or is this person actually going to get killed?”
DEFINING PERIODS
“We wanted transitions to help you know where you were” in time, Cohen explained. “There’s a bit in there where we cut from the fresh ’80s Billy the Beaver to a dilapidated beaver. There aren’t that many scenes in 2023, and they’re straightforward. With the present timeline, we kept it grounded more in comedy and the family life. When you go back to the ’80s, it gets a little bit bigger in terms of the volleyball game, the ’80s mom who hasn’t tried the cocaine, and all that kind of stuff. Our DP and production designers did a really cool job distinguishing visually between the two as well.”
The comedy scenes set in the 1980s were supposed to feel “inviting… to the audience,” DP Judd Overton told Frame & Reference (listen to the full conversation below). He added that this time frame was supposed to have a “John Hughes” vibe.
With that in mind, Overton says, they determined “having all that anamorphic wonkiness going on wasn’t quite the right way to go for what’s basically an ensemble comedy with a lot of body bits.”
He explains, “It was just too strong a look. So what we ended up opting for is the Geckos… rehoused vintage lenses, kind of like the K-35s, sort of, you know, ’70s vintage glass, but they’re very clean, and they behave nicely. You know, nice fall-off, and you get the occasional sort of rainbow flares and things like that. So they’re vintage without being too — you know, they don’t put up too much of a patina between the image and the audience.”
However, they did pull out the Orion 21mm for the time machine scenes, in which some disorientation felt more than appropriate, he told Kenny McMillan.
Anna Diaz as Heather Hernandez, Olivia Holt as Teen Pam, Liana Liberato as Tiffany Clark, Stephi Chin-Salvo as Marisa Song, Kiernan Shipka as Jamie Hughes in “Totally Killer”
THE FINAL GIRL TROPE
Khan notes that “the idea of the final girl is something that exists in the genre” of slasher films, and she took the opportunity to give “these women more agency” and update the trope.
As the daughter of lone survivor Pam (Julie Bowen), protagonist Jamie (Kiernan Shipka) has inherited both trauma and a set of survival tools, Khan explains to Rolling Stone. And then the inciting incident of Pam’s murder creates “an interesting handoff for the final girl idea.”
Kiernan Shipka and Olivia Holt in “Totally Killer,” courtesy of Amazon Studios
Khan says, “Something that was appealing to me was the idea that even though Jamie is being hunted, and there is a vicious killer on the loose, she’s actually kind of hunting him. She’s propelling the story in a way that feels new to me because she will not stop until she stops him. That unrelenting drive of this young woman who’s at the center of this movie just feels like a new kind of shade on that idea of a final girl.”
“I think what was compelling about this is … there’s a killer that’s hunting, but the main character is the one that is driving everything, like she is hunting this killer in her own way,” Khan explains to KCRW.
SET PIECES AND STUNTS
Choreography is also key to an effective horror film. “Totally Killer” stunt coordinator Simon Burnett helped Khan “shoot as much practically as we could,” she told Collider. His role required him to manage both a dodgeball fight with “the chaos of war” and encounters with a deadly serial killer on a waterbed and a Gravitron carnival ride.
“The first kill or fight sequence was really fun to put together,” Cohen tells 1428 Elm. “It was really fun to get to use the footage of her doing these stunts and just to work on selling the intensity and brutality of that scene. There’s a lot of little editing tricks, like speed ups and cutting a frame here and there. It’s interesting because some small little frame difference can make a difference between whether or not a hit or stab sells or if people think it looks weird.”
The waterbed was designed by Liz Kay and the scene was shot in a real home, so Khan says they used five cameras to capture lots of angles and avoid a reshoot.
And that Gravitron was also real and purchased with very little time to spare for the crucial final sequence. “They were down for all of it,” Khan says of the team at Blumhouse, which she attributes to a genuine enthusiasm for making movies.
To simulate the Gravitron’s movement (a no-go with cameras inside), Khan explains, “DP Judd Overton and his team had a lighting rig going outside because there’s like small cracks in between the panels so you can see the lights moving. And that suggests movement. And then we have the practical effects guys in there with blowers…so [you get] that effect of being stuck to the wall.”
The Gravitron “is as close to outer space as I’ve shot,” Overton joked to Frame & Reference.
Shooting the end scene, he says, “was a real challenge. It was a lot of fun.”
While the crew was limited in the modifications that they could safely make, Overton says they created a practical period look with “some LED strips, but they had a casing on them that basically looks like an old neon.” That was important because they “created a chase that we could really build up, so that as the film progresses, and as the Gravitron speeds up, you know, you see it in the lighting.”
“Even though the Gravitron wasn’t moving… you feel it,” Overton says of the effect’s efficacy.
What Do AI and the 15th-Century Printing Press Have in Common? Actually, A Lot
Jan Van Eyck, mirror detail, “The Arnolfini Portrait”
TL;DR
Samuel Hammond, senior economist at the Foundation for American Innovation, argues that the disruptive force of artificial intelligence could be as transformative as the invention of the printing press in the 15th century.
Like the printing press, AI isn’t just about making information accessible, Hammond contends; it’s about challenging the established order.
AI promises endless efficiencies, but it can also disrupt our institutions and perhaps even our current world order.
In an era where technology is advancing at breakneck speed, it’s easy to overlook the historical parallels that can offer us valuable insights. Samuel Hammond, senior economist at the Foundation for American Innovation, and a prominent voice in the field, argues that the disruptive force of artificial intelligence could be as transformative as the printing press was in the 15th century.
Hammond’s perspective on the potential near-term impact of AI on government and institutions is rooted in history and the transformative power of technology. In his three-part Substack series, “AI and Leviathan,” he draws a striking parallel between the rise of AI and the historical significance of the printing press, suggesting that AI’s impact could be just as destabilizing and consequential.
“My null hypothesis is that the democratization of powerful AI capabilities will be at least as destabilizing as the printing press,” he writes. “The printing press was also a mere information technology, and yet it led to civil wars and uprisings against the established order, and ultimately drove the consolidation of the modern nation-state.”
Hammond delves further into the potential near-term impact of AI on our government and institutions in conversation with Upstream’s Erik Torenberg.
“When you look across history at when there is broad flourishing of society, say, like, the Florentine Renaissance, it isn’t just that they had like the best masonry and the best banking, and so forth. It’s like they had the best everything,” he explains to Torenberg in the video at the top of the page. Looking at today’s world, countries that rank the highest in economic freedom and on the human development index “also have the best functioning governments and the least corrupted markets.”
This phenomenon, which Hammond calls the “X-factor behind institutions,” doesn’t boil down to “geography or demographics or major structural forces,” he says, “but there’s also these extra efficiencies that matter for society as a whole.”
AI promises endless efficiencies but also risks disrupting our institutions and current world order. To move forward, we have to learn how to grapple with the past.
A Brief History: The Printing Press and Its Revolutionary Impact
In the 15th century, Johannes Gutenberg’s printing press didn’t just democratize knowledge; it ignited civil wars, uprisings, and the formation of modern nation-states. Hammond contends that AI could have a similarly transformative impact, challenging existing power structures and fostering more inclusive governance.
In “AI and Leviathan: Part II,” Hammond points out that the English Civil War, a defining moment in the formation of modern governance, occurred at an inflection point in the printing revolution. The first publication of the King James Bible in 1611, just 30 years before the war, and the birth of journalism with the first regularly published English newspaper, the Oxford Gazette, were pivotal. These publications fueled public discourse and, in some cases, dissent, setting the stage for political upheaval.
“These technological regime changes precede institutional regime changes,” Hammond says, noting how the printing press played a role in the consolidation of fragmented kingdoms into modern nation-states, leading to the Peace of Westphalia in 1648.
AI, like the printing press, isn’t just about making information accessible, Hammond argues; it’s about challenging the established order.
Samuel Hartlib, an English reformer and contemporary of the printing revolution, aimed to record all human knowledge and make it universally available. Hammond sees a parallel between Hartlib and today’s AI evangelists who advocate for the democratization of AI capabilities.
Ethical and Regulatory Hurdles: Navigating the AI Minefield
As we stand on the cusp of an AI revolution, the ethical and regulatory challenges are manifold. “The issue is not that AI and informational technology are inherently destabilizing. Rather, the issue is that society’s technological base is shifting faster than its institutional superstructure can keep up,” Hammond writes in “AI and Leviathan: Part II.”
When it comes to regulation, Hammond poses a thought-provoking question: “How would you regulate AI?” he asks Torenberg. “I think the right level would be to say, you know, after transformative AI, what does the regulation even look like?”
Elaborating further on Substack, he suggests that our current regulatory frameworks might become obsolete in the face of transformative AI. “Whether or not this leads to a bureaucratic expansion is another question,” he says. “On the one hand, AI could enable regulators to devise rules with fractal specificity, micromanaging things that used to be illegible. On the other hand, the human and physical footprint of our bureaucracies could radically shrink, as even the finest-grained forms of compliance become automatic and thus invisible.”
The ethical considerations are equally complex. Hammond identifies two camps within the AI community: those who advocate for stringent regulation until safety can be assured, and those who favor open-source acceleration. “These schisms are reflected in the different AI camps,” he notes on Substack. “Some favor regulating AI development until we can assure perfect safety, perhaps through a licensing regime. Others wish to plow forward, accelerating access through open source.”
Hammond also sees the democratization of AI as a double-edged sword. While it has the potential to unlock new capabilities, it also comes with significant negative externalities. “As AI democratizes capabilities with significant negative externalities,” he writes in “AI and Leviathan: Part III,” “it will simultaneously unlock new institutional forms for dealing with those externalities.”
The Future is Now, and History is Our Guide
AI is by no means the first disruptive technology to emerge since the 15th century, but navigating its transformative potential requires a nuanced understanding of its historical parallels. “Software will eat the world,” Marc Andreessen once famously said, and just like software, AI is now eating the world, Hammond posits.
“Anytime we’ve had these technology shifts, it’s really shifted the balance of power between nation-states and society,” he tells Torenberg. “And I think this one is no different.”
With this new technological frontier on the horizon, the importance of learning from history cannot be overstated. Decisions made today will undoubtedly shape tomorrow’s institutions.
Across the Media & Entertainment sector, the urgency of these considerations intensifies. AI has already made its mark, influencing everything from content creation to audience engagement. Yet Hammond warns that this transformative game-changer comes with its own set of destabilizing risks.
Moving forward, a balanced approach is essential. We must embrace the optimism that AI’s capabilities bring, he says, while exercising caution informed by historical lessons. After all, in this rapidly evolving landscape, history remains our most reliable guide.
Virtual Production: How to Shift Your Perspective
“I don’t say virtual production so much anymore, as much as real-time production, real-time workflows,” Christina Lee Storm tells Lori H. Schwartz in an interview ahead of NAB Show New York. Watch their full conversation (above).
“When I think about virtual production, I think much, much more holistically,” Lee Storm says. “It’s six different attributes that all come together to really help a creative, help the filmmaker, and their process.”
She adds, “Virtual production is not just like a volume or a stage, and there’s still a little bit of a lag on education on that.” Rather, Lee Storm explains, “It’s part of a mindset of collaborative real-time workflow.”
Lee Storm is noticing improvements that “optimize workflows.” But she finds collaboration tools particularly exciting because they enable “creatives to, sort of, be in a place, be in a sandbox to work together.”
“There’s so much going on with real time pre-vis, and things like that, and how that’s going to sort of evolve and change and shift,” Lee Storm says. She notes that animation is further along with using this type of technology and could provide a roadmap for how this may progress.
“There’s still lots of challenges. Again, it’s emerging tech. And really, I think, at this point, we can have some best practices,” she says.
Gen Z Is Changing Everything
Also emerging, Lee Storm says, are changes in M&E advanced by Gen Z’s preferences.
“Their viewpoint of entertainment is totally different. It’s much more holistic, it’s much more fluid. It’s going between apps, spaces and things,” Lee Storm explains.
Because of these shifts, she encourages us to think of “the entertainment umbrella,” which Lee Storm says is “basically being a little bit more holistic of how we view entertainment.”
“Some of it is disruptive,” she says, noting the shift in which IP is now considered a launch point. It’s no longer limited to film (although that’s still viable); IP can now launch from gaming, user-generated content, social media, or even live entertainment.
Lee Storm describes the influence of both TikTok and live entertainment as “huge,” noting an interesting interaction “between live and digital.”
Additionally, she says, “This is all sort of pointing to also Gen Z really wanting to be much more participatory, as well as interactive and immersive.”
Interactivity and immersive technology are “taking that next step of storytelling,” she says.
Dave McKean: Generative AI Creates “Stuff” (Not Art)
TL;DR
“Prompt: Conversations With AI” is illustrator and director Dave McKean’s response, in the form of graphic stories, to learning that his work had been fed into AI learning data sets.
McKean is very concerned about AI’s impact on society at large and its influence on creativity in particular. He believes that effort and process are crucial to making art.
While he has many problems with how generative AI has been developed and is being used, McKean has experimented, and continues to experiment, with gen AI as a tool for his work, understanding that it will not go away but rather become ubiquitous.
Many creatives have been struggling with their feelings about generative artificial intelligence. Illustrator and director Dave McKean went one step further as he grappled with AI: He wrote a 96-page book as he processed how this technology would change his work… and our society.
“Prompt: Conversations With AI” is McKean’s response, in the form of graphic stories, to learning that his work had been fed into generative AI data sets.
“I’m just an individual human being, trying to make sense, to work out what the hell’s going on, mostly by making marks on paper,” McKean told an audience at Film and Media Exchange, where he was invited to share his thoughts on AI. (Watch the full keynote and Q&A below.)
His initial reaction was along the lines of… not good.
“I did about 10 minutes’ research on what Midjourney was and what artificial intelligence image creation was. And then I spent a day in a fetal position on the floor of my studio,” McKean said.
McKean’s feelings have evolved, but he still maintains strong reservations about AI’s use in creative endeavors.
“I’m very much a critic of this. But a critic who wants to understand it,” McKean said.
AI Explorations and Experiments
After his initial panic, McKean determined he’d need to explore AI’s capabilities, so he began to experiment with NightCafe, a free tool.
“I fed lines from Roadside Picnic into the bot, and it spat out a few raw materials,” McKean said. But ultimately, he “found it quite hard work to get anything out of it that I liked in any way.”
Later, McKean tried another tack: using it as a shortcut to design an album cover for a friend. “I didn’t really have a firm idea of what it was … so I typed eight words into Midjourney.” The generated image “looks like my work,” he said, so he used it and liked it. (His friend’s band ultimately did not.)
However, that initial result also freaked him out and caused McKean to reach out to a neighbor (who happens to be an expert in AI) to understand how the bot was able to recreate his style with such a general prompt. “I just wrote some very bland words in that I thought might put it in a certain kind of image, but I didn’t use my name at all,” McKean recalls. “And she said, ‘Well, it probably knows your IP address.’” (An unsettling idea!)
His next endeavor involved translating the Epic of Gilgamesh into images using AI. He wondered, “Would anything of the story survive that weird conversation?” The end result was that some of the plotlines were discernible to someone unfamiliar with the myth (as tested on McKean’s wife).
McKean also experimented with feeding newspaper headlines into the bot as a way “to see how AI sees us.” He did this over the course of a month, which also happened to be the time frame when Google engineer Blake Lemoine was fired after claiming a chatbot had become sentient, as well as the month of the Uvalde, Texas, massacre.
“All kinds of strange images came out of this,” McKean said.
Strange and at times disturbing. McKean ultimately did not include the AI-generated description of the school shooting, the only image he “censored” from inclusion in the book. “What I took from that is that AI has no feelings to hurt, doesn’t care about hurting our feelings,” he said.
His final experiment was to feed his questions about AI… into the AI. He also included photographs that he’d taken on his daily walks (which he’d use to create his AI musings), and created renderings combining the images and questions. After enough re-renderings, “Everything started to look like my work,” he said.
Art Vs. Stuff
McKean thinks that generative AI has created “ethical black holes” that “are obviously really pressing.”
Beyond the legality and issues like copyright, he is concerned about more philosophical aspects, like: “What is the nature now of art and creativity, if all that is happening is somebody’s tapping away?” and “Where do any of us fit into this anymore? And is this the future we want?”
McKean has answered some of these questions, for himself, anyway.
First, he said, “I don’t call it AI art. It’s AI stuff. It’s a great generator of stuff, an endless plane of stuff.”
While that sounds rather derisive, he also admits, “It remains a powerful tool for generating curious and surprising stuff. And that stuff when curated by interesting prompters like my friends, Mario Cavalli, and Ryan Hughes, for example, can be genuinely engaging. It can be used as raw material for further collage or physical media work, and can therefore be part of the creative process.”
He said, “The images that it coughs up are genuinely surprising. And surprise is a big part of creativity. It’s a big part of what we hope to get in our own work.”
Nonetheless, when it comes to creativity, McKean said, “It’s the challenge that’s important. And it’s in the doing of it. And for it not to be easy, and to do things that surprise yourself, and to find things along the way that you would never have discovered about yourself, or about the thing that you end up doing.”
McKean explains, “The AI equivalent of going for a walk is just a teleport to the final destination. Well, that’s not a walk, is it? That gives you nothing of what you want to go on a walk for. It doesn’t get blood pumping, or air in your lungs, or it doesn’t get you — allow you — to try a path that you wouldn’t have come across if you hadn’t gone on that walk or that journey. That’s creativity.”
Even worse, McKean said, “There’s no magic involved with Midjourney. It’s boring and tedious.”
As an artist, he said, “I don’t want to spend my life tapping away on a keyboard. This is so boring. I like making things and getting my hands dirty. So I’ll play with it until I’m bored with it, which is 10 minutes.”
Legal and Ethical Challenges
“Even though I don’t think there’s any chance of returning any or many of the contents to Pandora’s box, I do think the total lack of any forethought on the part of the radical tech fundamentalists has to be called out,” McKean said. “The argument that it is just a tool remains hopelessly naive. It is a tool, but not just a tool. We are racing to keep up with the implications of how it will change society.”
“There are lots of people trying to take these people to court trying to get some sort of opt out, trying to get some sort of royalties paid whenever your name is coughed up. I honestly don’t hold out much hope that things are going to change,” McKean said.
One of his own bright lines is not using AI to copy another creative’s style directly. “Putting a prompt into Midjourney using another artist’s name, as far as I’m concerned, is obviously wrong. I’ve never done it. And I hope none of you have either,” McKean told the audience.
He also scoffs at the idea that AI programmers can’t solve some of the problems we’ve already been presented with.
For example, McKean said, “If it is possible to blockchain an image to prove its ownership no matter what turbulence happens to it online, then it must be possible to watermark AI activity to at least prove, beyond doubt and to conspiracy theorists, that the photorealistic image of a beloved television news anchor in sexual congress with a koala bear is fiction.”
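McKean’s analogy suggests a simple mechanism: if a generator cryptographically tags everything it produces, anyone can later check whether a given file is unaltered generator output. Below is a minimal, hypothetical Python sketch of that idea using only the standard library; the key name and handling are invented for illustration, and real provenance schemes (C2PA-style content credentials, for example) rely on public-key signatures and are far more involved.

```python
# A toy version of "watermarking AI activity": the generator tags each output
# with an HMAC over its content hash, so the tag can later prove whether a
# file is unaltered generator output. GENERATOR_KEY is hypothetical; a real
# scheme would use public-key signatures so verifiers never hold the secret.

import hashlib
import hmac

GENERATOR_KEY = b"demo-secret-held-by-the-ai-service"  # hypothetical key

def sign_output(content: bytes) -> str:
    """Tag generated media with an HMAC over its SHA-256 content hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(GENERATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify_output(content: bytes, tag: str) -> bool:
    """Check a tag against this exact file; any edit invalidates it."""
    return hmac.compare_digest(sign_output(content), tag)

if __name__ == "__main__":
    frame = b"...photorealistic anchor-and-koala image bytes..."
    tag = sign_output(frame)
    print(verify_output(frame, tag))          # True: provably generator output
    print(verify_output(frame + b"x", tag))   # False: alteration detected
```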
Majority Report: How Do We Monitor AI?
TL;DR
Russell Wald, the director of policy for Stanford’s Institute for Human-Centered AI, argues for regulation that recognizes both AI’s unique benefits for humanity and its very serious dangers.
Aside from regulation, Wald thinks the US needs a national AI strategy: policymakers educated about the issues, greater transparency into data models, the involvement of academics and other civil society leaders at the policy level, and an emphasis on STEM skills to build the workforce.
He looks to EU and UK lawmakers for guidance as to how the US should police AI at home.
It is urgent that we regulate synthetic media and deepfakes before they undermine our faith in the truth, says Russell Wald, the director of policy for Stanford’s Institute for Human-Centered Artificial Intelligence.
“I’m concerned about synthetic media, because of what will ultimately happen to society if no one has any confidence in the veracity of what they’re seeing,” he says in an interview with Eliza Strickland at IEEE Spectrum about creating regulations that can cope with the rapidly evolving technology.
“You’re not going to be able to necessarily stop the creation of a lot of synthetic media, but at a minimum, you can stop the amplification of it, or at least put on some level of disclosure that signals it may not in reality be what it says it is,” he says.
The other area that Wald thinks would help in terms of overall regulation is greater transparency regarding foundation data models.
“There’s just so much data that’s been hoovered up into these models, [but] what’s going into them? What’s the architecture of the compute? Because at least if you are seeing harms come out at the back end, by having a degree of transparency, you’re going to be able to [identify the cause].”
Of calls for regulation coming from AI developers themselves, Wald is scathing: “For them, it really comes down to whether they would rather work now to create some of those regulations versus face reactive regulation later. It’s an easier pill to swallow if they can try to shape this at this point.”
What he would really like to see is greater diversity of viewpoint in the discussions and decision-making process: not just from those in the tech industry, but from academics like himself and from lawmakers.
“Others need to have a seat at the table. Academia, civil society, people who are really taking the time to study what is the most effective regulation that still will hold industry’s feet to the fire but allow them to innovate?”
This would mitigate the risk of inherent bias in certain algorithms on which decisions in judicial, legal or medical contexts might be based.
Like many academics with knowledge of the subject, Wald calls for a balanced approach. AI does have significant upside for humans as a species, he says, pointing out the unprecedented ability of AI to sift through and test data to find solutions for diseases.
“At the same time, there’s the negative that I am truly concerned about in terms of existential risk. And that is where the human comes into play with this technology. Synthetic biology, for instance, could create agents that we cannot control. And there can be a lab leak or something that could be really terrible.”
Having given a précis of what is wrong, Wald turns to potential solutions by which we might regulate our way out of potential disaster. His approach is multi-pronged.
“First, I think we need more of a national strategy, part of which is ensuring that we have policymakers as informed as possible. I spend a lot of time in briefings with policymakers and you can tell the interest is growing, but we need more formalized ways of making sure that they understand all of the nuances here,” he says.
“The second part is we need infrastructure. We absolutely need a degree of infrastructure that ensures we have a wider degree of people at the table. The third part of this is talent. We’ve got to recruit talent, and that means we need to really look at STEM immigration and see what we can do, because at least within the US, for those students who can’t stay here, the visa hurdles are just too terrible. They pick up and go, for example, to Canada. We need to expand programs like the Intergovernmental Personnel Act that can allow people who are in academia or other nonprofit research to go in and out of government and inform governments so that they’re more clear on this.”
The final piece in Wald’s argument is to adopt regulation in a systematic way. For this, he looks to the European Union, which is one of the most advanced territories in terms of formulating an AI Act. However, this is not expected to be ratified for at least another year.
“Sometimes I think that Europe can be that good side of our conscience and force the rest of the world to think about these things. This is the Brussels effect — the concept that Europe has such a large market share that they’re able to force through their rules and regulations, being among the most stringent, and it becomes the model for the rest of the world.”
He identifies the UK’s approach to AI regulation as a potential model to follow because it seems to be more balanced in favor of innovation.
“The Brits have a proposal for an exascale computing system [to] double down on the innovation side and, where possible, do a regulatory side, because they really want to see themselves as the leader. I think Europe might need to look into, as much as possible, fostering an environment that will allow for that same level of innovation.”
Wald’s concern that regulation will stem innovation is not about protecting the larger companies, who can look after themselves, he says, but that the smaller players might not be able to survive if the law is too stringent.
“The general public should be aware that what we’re starting to see is the tip of the iceberg,” he warns. “There’s been a lot of things that have been in labs, and I think there’s going to be just a whole lot more coming.
“I think we need to have a neutral view of saying there are some unique benefits of AI for humanity, but at the same time, there are some very serious dangers. So the question is, how can we police that process?”
Martin Scorsese: “Killers of the Flower Moon” and American Mythology
Making of “Killers of the Flower Moon”
TL;DR
During an ever-fascinating tour through his life and career in an interview with Edgar Wright at the BFI London Film Festival, 81-year-old director Martin Scorsese decries the demand for “content.”
As filmmaking technology becomes easier to access, he hopes filmmakers can use that accessibility to tell stories that matter.
He explains how he had to rework “Killers of the Flower Moon” to concentrate on a central love story, rather than the FBI.
Martin Scorsese continues to fear for the future of cinema, deriding the demand for “content.”
He likens “content” to having TV on in the background, or the noise of a radio while you go about your daily business.
“And now, of course, I keep TCM on as much as possible.”
For Scorsese, naturally, movies are an art form and going to the cinema is almost spiritual. “The experience of seeing a film with a lot of people is really still the key, but I’m not sure that can be easily achieved at this point,” he said.
“I’m afraid that the spectaculars, or what they call franchise films, will be taking over the theaters. I always ask the theater owners to maybe create a space where younger people [could see] a new film, which is not a franchise film, sharing it with everybody around them. So that they want to go to the theater, and it doesn’t get to the point where [they] could see it at home.”
TRAILER FOR “KILLERS OF THE FLOWER MOON”
Old-Fashioned… Innovation
Scorsese has made his thoughts known on this before, and most people will be sympathetic to his cause. Naturally, he continues to make movies the old-fashioned way: on film, on location, with big themes and subjects told at length. His latest, Killers of the Flower Moon, clocks in at nearly three-and-a-half hours.
The celebrated director is not immune to digital technology, though. He used cutting-edge techniques, of course, to de-age actors for The Irishman. He understands that digital video cameras enable younger filmmakers today to make movies on their own terms, just as he did with his breakthrough drama Mean Streets half a century ago.
“If I was able to shoot digital, or even just video, I would have shot Mean Streets, and that way, I wouldn’t have to pay for the lights… which means we don’t need the studio.”
Scorsese’s inspiration for Mean Streets (1973) was John Cassavetes’ verité work Shadows (1959), and he thinks similar freedom is afforded to filmmakers using digital technology.
“It’s so much freedom that I think you have to rethink what you want to say and how you want to say it and use that technology,” he warns. “Ideally, what I hope is that — and I hesitate to use the word — but serious film could still be made with this new technology so that it can be enjoyed by an audience on a big screen.”
A Path of Memories
Wright did well to steer Scorsese, always a vivacious and passionate communicator, through a tour of his entire career, playing clips from movies including Mean Streets, Taxi Driver, Goodfellas, The King of Comedy and more in a roughly 90-minute presentation on stage.
Before all this, the 81-year-old filmmaker revealed that he had a “natural enthusiasm to want to share an experience” with other people. “We couldn’t afford to go to the theater, so it was in the movie theaters at the time. I wanted somebody to share and enjoy it with — all together. At a certain point, I began to get very, very excited by sharing as much as possible my experience with younger filmmakers. And then from their films, I get reinspired. It opens up a whole world,” he said.
“I always thought of myself more as a teacher than as a filmmaker.”
Leonardo DiCaprio as Ernest Burkhart and Lily Gladstone as Mollie Burkhart in “Killers of the Flower Moon.” Cr: Apple TV+
Lily Gladstone as Mollie Burkhart and Leonardo DiCaprio as Ernest Burkhart in “Killers of the Flower Moon.” Cr: Apple TV+
Robert De Niro as William Hale and Leonardo DiCaprio as Ernest Burkhart in “Killers of the Flower Moon.” Cr: Apple TV+
We learn that before making films he made what he calls storyboards, or frames, of stories, shot by shot. Sadly, “the earliest ones I destroyed. I made my own versions in color, painting watercolors, but then I threw all that away,” including, it seems, a “Roman epic, which I never finished,” planned with cameras booming down on a Roman legion entering the gates of Rome.
He explained that during his time at NYU making short films, his technique for making movies came first, but the stories and themes he really wanted to explore came later. At college in 1964 he was inspired by seeing Bernardo Bertolucci’s film Before the Revolution. Bertolucci was only 22 at the time, and it was already his second film.
“I wanted to be able to express myself that way. It had such a joy of not only filmmaking, but of life, and it had such depth of culture, but I don’t come from that culture. I don’t look at politics. Ultimately, I had to find a way of expressing myself from my own culture. What kept me going was the ambition and the determination to reach that level that I had seen in Before the Revolution.”
He shared that the character of Johnny Boy played by Robert De Niro in Mean Streets is based on someone who is still alive, and that the playfulness of Johnny Boy and Harvey Keitel’s character in the film stemmed from “Abbott and Costello or Bing Crosby and Bob Hope” in the “Road to…” movies.
Later, he notes that the notorious scene in Goodfellas — in which Joe Pesci’s character says, “You think I’m funny. Funny how?” — “actually happened to Pesci… he took a chance with this guy and he got out of it but it was terrifying. And so that kind of banter, so to speak, structured, joking, enjoying themselves puffing up… that sort of thing was something that we did intentionally.”
For De Niro’s famous line reading of “You talkin’ to me?” in Taxi Driver, Scorsese says, “Bob improvised. I asked him to talk to the mirror. There’s a shot of him controlling the gun, which kind of comes from Shane [a film he knew was special when he watched it at age 12]. But primarily I was at his feet. And I was just saying do it again. And he would do it again. And then he just got into a rhythm, you know.”
He credits editor Tom Rolf with making the scene “so beautifully constructed” (editors Marcia Lucas and Melvin Shapiro also cut the picture), and has warm words for Thelma Schoonmaker, with whom he worked on his first film, Who’s That Knocking At My Door, and then on every picture since Raging Bull.
“What’s great about Thelma is that she has no film theory — she brings just the passion, the philosophy of it, and really has no preconceived ideas about which filmmakers are more important. So we could just look at the footage like we used to work in documentaries back in the 60s.”
Robert De Niro as William Hale and Jesse Plemons as Tom White in “Killers of the Flower Moon.” Cr: Apple TV+
His modern classics like Raging Bull, The King of Comedy and The Last Temptation of Christ all came from personal sacrifice and a tenacity on Scorsese’s part to “will them into existence,” said Wright.
“I agree, but part of it was I still have the desire to utilize film to tell stories — but what the hell story did I want to tell? The kind of stuff that was being made in the studio system just wasn’t working for me. They tried, they really tried to give me different ideas. I said, ‘No,’” he recounted.
“Ultimately, it’s a battle and not giving up. And maybe the films weren’t as good as the ones [I made in] the ‘70s but it didn’t matter. I had to get certain things made the way I wanted to make them.”
For the main characters in The King of Comedy, played by De Niro and Jerry Lewis, the director says, “I think I was having trouble coming to terms with myself. How much of me is Rupert, how much of me is Jerry, how much of me wants to be Jerry. But Jerry is a mess too. I mean, all this going on. And it may be a little too close to home. It was extremely uncomfortable. I had difficulty shooting it because of that, and in the editing too.”
He says Schoonmaker and her husband, the late great British filmmaker Michael Powell, helped him get over the finish line.
Scorsese’s First “Western”
And so on to Killers of the Flower Moon, the nearest Scorsese has come to making a Western.
“I grew up watching westerns. And I loved them because, you know, I couldn’t go anywhere. I couldn’t go near animals, I couldn’t run, I couldn’t do so because of the asthma. Whenever they tried to take me to a park or whatever, I started getting an asthma attack or get an allergic reaction to all the nature around me. So for me to see it on the screen — beautiful palomino horses — this was heaven for me.”
From “Killers of the Flower Moon,” Cr: Apple TV+
As other film historians have argued, he identifies the end of the classic genre with Sam Peckinpah’s The Wild Bunch (1969), “that ended part of the history of America too.”
Now he is using the form to retell a true story that deconstructs foundational American myths, but his approach changed radically during its writing with screenwriter Eric Roth. Instead of focusing on the FBI agent, who was to be played by Leonardo DiCaprio, the story shifted to the love affair between a white settler and an Osage Nation woman.
“We had to really rethink the picture, but I was glad that I had that time during COVID to rethink it. Eric said to me, ‘Where’s the heart of the movie?’ And I immediately said, ‘Well, it’s Mollie and Ernest because they’re in love.’ And I found that out from hanging out with the Osage in Oklahoma, that it isn’t as simple as people coming in and shooting and poisoning. It’s the betrayal of trust. They said, ‘You have to understand Mollie, that despite everything, they were in love.’”
If the heart of the picture was going to be Mollie and Ernest, then DiCaprio decided he should play Ernest. “We had to take the script and rip it inside out. Which is what we did.”
Lily Gladstone and director Martin Scorsese on the set of “Killers of the Flower Moon.” Cr: Apple TV+
The story may appear to be unearthed from the archive, but Scorsese points out that it has been present in numerous magazines, “like Penny Dreadfuls,” and in articles in Harper’s Magazine, as well as in Hollywood.
“Even in musical numbers, in movies in the ‘50s, there was always a number with the Native Americans dancing around with oil shooting up behind them. But the thing about it was that by 1958 nobody remembered it, except when they made The FBI Story,” the 1959 film directed by Mervyn LeRoy and starring Jimmy Stewart, which is “basically a greatest hits of the FBI,” he said.
“I thought of actually recreating the shooting of that film towards the end of my film, but, instead, it went another way.”
From director Martin Scorsese’s “Killers of the Flower Moon.” Cr: Apple TV+
New Tools That Are Advancing Character Animation
TL;DR
Character animation using traditional software and mo-cap processes can be accelerated up to eightfold using new tools.
The project’s VFX supervisor explains the workflow used to create a five-minute video on a very tight schedule.
Creating content on YouTube with Reallusion software often leans towards photorealism, but this project aimed for stylized, toon-like characters.
Character animation is an immensely labor-intensive process, but technology demonstrated in the production of a new short film is radically reducing that time.
Creative agency The Tomorrow Lab collaborated with First Person to create “The Mañana Cabana,” a short film introducing the idea of AI assistants for client CableLabs.
“We were only going to have five weeks to complete post-production on this five-minute film, so we needed a different approach if we were going to be successful,” explains the project’s VFX supervisor, Geoff Hecht, in this video explainer. “We were able to do it in an eighth of the time that it might have taken using other tools and techniques.”
Creating content on YouTube with Reallusion software often leans towards photorealism, but this project aimed for stylized, ‘toon-like characters.
However, the tight schedule demanded efficient decisions, beginning with researching stock libraries for characters.
Normally, a stock character’s compatibility with a given animation package is a limiting factor. Reallusion’s tools, however, are platform-agnostic, according to the company.
Character Creator’s iAvatar system provided facial and body controls. The main character, Louie, began as a stock model and was adapted in iAvatar, with Character Creator used to enable full facial animation.
“iClone has three great tools for the facial animation of characters, and we used them all for the Mañana Cabana project,” Hecht explains. “We began by using iPhone Live Face, where we can link our animator’s face to control the facial movement of that character. This created a very natural foundation, but we needed extremely accurate lip-sync animation, so our second step was to use the AccuLips feature inside of iClone.”
In AccuLips, they lined up editable keyframes for each mouth shape in the timeline. “We found, however, that if you want to hold a lip pose longer, like an ‘M’ or ‘O’ shape, you can stack two of the same shape keys next to each other in the timeline. We felt that this was a more accurate result.”
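The shape-key trick is easier to see in miniature. The following toy Python sketch (purely illustrative, and not iClone’s actual API) models a viseme timeline with linear blending between keys: with a single “M” key, the mouth starts drifting toward the next shape immediately, while a stacked duplicate “M” key holds the pose flat until the second key.

```python
# Purely illustrative sketch of the "stack two identical shape keys" trick;
# this is NOT iClone's API, just a toy viseme timeline with linear blending.
from dataclasses import dataclass

@dataclass
class Key:
    frame: int   # timeline position
    shape: str   # viseme, e.g. "M", "O", "AH"

def pose_at(keys: list[Key], frame: int) -> dict[str, float]:
    """Blend-shape weights at a frame, linearly interpolated between keys."""
    keys = sorted(keys, key=lambda k: k.frame)
    prev = keys[0]
    for k in keys:
        if k.frame <= frame:
            prev = k
        else:
            if prev.shape == k.shape:        # duplicated key: pose holds flat
                return {prev.shape: 1.0}
            t = (frame - prev.frame) / (k.frame - prev.frame)
            return {prev.shape: 1.0 - t, k.shape: t}
    return {prev.shape: 1.0}                 # past the last key

single = [Key(0, "M"), Key(8, "AH")]
held   = [Key(0, "M"), Key(4, "M"), Key(8, "AH")]  # the stacked "M" key

print(pose_at(single, 2))  # {'M': 0.75, 'AH': 0.25} -- "M" already decaying
print(pose_at(held, 2))    # {'M': 1.0}              -- "M" held closed
```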
The character had to perform to the camera, so with the first two techniques applied, they finished off the animation using the Face Key tool within iClone.
“This allowed us to specifically point the character’s eyes, eyebrows, face, etc. in whatever direction we felt, making it look more like the actors and characters were interacting with each other,” he says.
Character animation in traditional pipelines can take an eight-hour workday to create four seconds of animation. Hecht has used Maya, Cinema 4D and Blender for character animation projects, having started with Autodesk Maya in 2002. He says animating in iClone is more like combining Adobe Premiere with Maya: “You can quickly drop animation clips into your timeline, and your character just comes alive.
“It’s so fast to just try things, and if it doesn’t work out, no big deal, we’ll try something else out five minutes later. I think some people have stayed away from motion capture because it wasn’t easily editable. But iClone has the ability to mix and match mocap clips, taking the parts that you like and removing the parts that you don’t as well as adding hand keys. It’s a real hybrid system.”
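For a sense of what “an eighth of the time” means at this scale, here is the back-of-the-envelope arithmetic implied by the figures above, treating both the four-seconds-per-day rate and the 8x speedup as rough estimates:

```python
# Back-of-the-envelope math from the figures quoted above; the 4 s/day
# traditional rate and the 8x speedup are both treated as rough estimates.
film_seconds = 5 * 60        # "The Manana Cabana" runs about five minutes
seconds_per_day = 4          # finished animation per eight-hour workday
speedup = 8                  # the "eighth of the time" claim

traditional_days = film_seconds / seconds_per_day   # 75 workdays
accelerated_days = traditional_days / speedup       # ~9.4 workdays

print(f"traditional pipeline: {traditional_days:.0f} workdays")
print(f"accelerated pipeline: {accelerated_days:.1f} workdays")
# ~9-10 workdays of character animation fits inside a five-week post window;
# 75 workdays plainly would not.
```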
Evan Shapiro Amplified: Welcome to the User-Centric Era of Media & Entertainment
TL;DR
In “Evan Shapiro Amplified: What’s Next,” media’s official Unofficial Cartographer warns that the Media & Entertainment industry is already in a new era, which companies will disregard at their own peril.
This new user-centric era officially began in 2019 with the launch of Disney+, which passed Netflix in total worldwide subscribers in just three years.
Shapiro urges media companies to let go of outdated practices and embrace new strategies that will propel them into the future.
He points to “The New York Times” as a case study of a traditional media company that has successfully shifted to a user-centric business model.
Data doesn’t lie, and nobody interprets it quite like media universe cartographer Evan Shapiro. In this exclusive Q&A, “Evan Shapiro Amplified: What’s Next?” he sifts through the numbers… and forget about clinging to the past; Shapiro’s new research is about to drag Media & Entertainment into a future it can’t ignore. Watch the full discussion in the video at the top of the page.
The New Media Ecosystem: The User-Centric Era
According to Shapiro, our current user-centric era officially began in 2019. “When Disney+ launched, they got 10 million new subs in one month,” he explains. “In less than three years, they passed Netflix for total worldwide subscribers, and in the process brought everyone else from traditional media along into this brave — or scary — new world.”
Yet, many media companies are still struggling to adapt. “So we find ourselves in a world where these leaders from past eras… are now scrambling to make up for quickly falling revenues and audiences in traditional media,” he says.
With the media landscape in constant flux, Shapiro’s new map of the 2023 Global Media Ecosystem serves as a navigational tool for understanding these seismic shifts. Scaled by communities and revenue, the map provides a visual guide to changing consumer habits and the choices they face.
One of the most significant shifts has been the rise of big tech companies in the media landscape. Shapiro doesn’t mince words when describing the influence these companies have. “These massive big tech Death Stars and the disproportionate gravity that they have in the ecosystem… enables them to invade the markets of the traditional players in these segments, and take mindshare, revenue share, attention share away from the traditional players.”
In this new ecosystem, consumers are not short on choices. When they pick up their devices, they’re greeted with a plethora of options. “Now we’re in a period or an era where the consumer curates their own media bundle on their phones,” Shapiro says.
Media companies can no longer afford to look at their services in isolation. “When you look at where your life’s blood of audience and revenues are coming from, you have to look at these things all in concert of each other. You cannot take them as individual pegs inside a larger ecosystem, you have to see the connectedness now,” he advises.
If you’re still waiting for the user-centric era, you’re behind the curve. “We’re not approaching it. It’s not coming. It’s not on its way. It’s here and it’s been here for some time,” Shapiro emphasizes.
As Shapiro sees it, traditional media companies may soon find themselves part of a larger lifestyle bundle. “That is a big part of the mandate: old-school big media that we’ve known for the last 50 years will be subsumed and become part of larger enterprises that are offering a lifestyle bundle of services that include things like free delivery, or streaming music, or books or gaming, or software, or cloud services,” he predicts.
Case Study: The New York Times
Shapiro points to The New York Times as a prime example of how media companies can not only adapt but thrive in the new user-centric era of media. The NYT’s journey offers valuable insights for any media enterprise looking to navigate today’s complex landscape.
The New York Times doesn’t have 200 million subscribers, yet it’s thriving because it’s super-serving a highly engaged, valuable set of consumers. “It sounds simple, and yet, almost no one’s doing it,” Shapiro notes. He adds, “Scale for scale’s sake is not the quest here, super-serving a very highly engaged, valuable set of consumers across a set of revenue inputs and outputs — that’s the assignment now.”
In the early 2010s, The New York Times found itself in a precarious position. “The New York Times was in a nosedive,” Shapiro recalls. The media giant seemed on the brink of either dissolution or absorption by a larger enterprise. The Times took a bold step by bringing in fresh leadership from outside the American media ecosystem. “They hired the head of the BBC, Mark Thompson, someone who had never run a commercial enterprise before,” he notes.
Thompson’s arrival marked a sea change for the organization. “He took money out of the print newsroom and put it into the digital newsroom,” says Shapiro. The shift from a print and advertising-centric model to a digital subscription focus was radical but necessary. The strategic pivot paid off handsomely. “In 2022, where the rest of digital media was suffering, The New York Times had a surge not just in subscribers, but also in revenues and profits,” Shapiro highlights.
The Times also shook up its internal structure. “They hired a bunch of young, late 20s, early 30s product managers and thought leaders and empowered them to make changes at the enterprise,” he says.
The Times has expanded far beyond traditional news. “They have one of the most important podcasts in the world. They make television for FX and are a very important gaming company,” Shapiro points out.
Despite these changes, the Times has stayed true to its core mission. “Their mission to educate and inform the world remains intact. They did not sell their souls in order to succeed,” he says.
This case study serves as a blueprint for other media companies. “The opportunity is to find new ways to get [these services] in the hands of today’s consumers,” he advises.
The Road Ahead
As we navigate the complexities of the new media ecosystem, Shapiro leaves us with some actionable insights and a sense of optimism for what lies ahead. He emphasizes the importance of recruiting, and listening to, the younger generation within your enterprise.
“Listen to people who perhaps haven’t always been there, who might have better ideas on what’s actually coming because they’re the consumers you’re looking to serve,” he advises. In other words, the Gen Z and Gen Y employees within your organization aren’t just staff; they’re also the consumer base you’re trying to reach.
The media landscape is evolving, and so should your business model. Shapiro encourages media companies to think outside the box: “Think about new ways to generate revenues to keep consumers satisfied, that aren’t necessarily your traditional business model.” This could mean diversifying revenue streams or rethinking your content strategy to better serve your audience’s needs.
This is more than a moment; it’s a movement. The user-centric era is here to stay, and the time for action is now. Don’t get left behind — embrace the future today.
Despite the challenges ahead, Shapiro sees a bright future for the industry. “We can get back to growth, we can get back to relevancy, we can get back to opportunity inside the media ecosystem,” he says. The key is to let go of outdated practices and embrace new strategies that will propel your company into the future.
Stay tuned for the next installment of Evan Shapiro Amplified, where media’s official Unofficial Cartographer will dive into the biggest industry trends we can expect in 2024.
Casey Neistat: Create First and the Money Will Follow
Casey Neistat on the move
TL;DR
YouTuber Casey Neistat turned to the platform because he appreciated the direct distribution model and relative freedom to create videos that he believes in.
Neistat is worried that more and more content is being created for commercial success and based more on analytics than artistic vision.
However, he thinks it is wonderful that the barrier to entry for filmmakers and creators has dropped. Neistat says he understands why young people now want to be content creators rather than directors.
“The greatest asset I have as a filmmaker is the fact that I never went to film school,” Casey Neistat told the audience at Bild Expo.
He explains, “If I had known how to make movies the proper way, then I would have never gone to Walmart and bought two of their cheapest cameras and turned that into a show that was on HBO. It was that lack of understanding of the right way to do things that forced us to find our own path. And I think that was a virtue.”
Neistat is probably most famous for his work as a YouTuber, but he notes that wasn’t his original goal. After all, his current career “wasn’t an option” when he first started creating videos.
He believed that playing around with iMovie 1.0 and experimenting with exporting video to VHS tapes would be the stepping stone to becoming a filmmaker. And despite all the challenges, Neistat says, “I was able to use those stepping stones ultimately to actually become a formal filmmaker and make movies that were in film festivals and shot on celluloid like real film.”
Neistat did eventually find success, but he also chafed at some of the strictures inherent in working with large companies.
In 2010, “Daddy Longlegs” (a collaboration with the Safdie brothers) premiered at Cannes and Sundance, winning several awards.
That same year, Neistat says, “HBO bought my show [“The Neistat Brothers”]. I felt like I had achieved a tremendous amount of success, more than I could have imagined in the traditional film world. And I kind of hated it. Like I hated the process.”
Specifically, Neistat disliked the distribution part. First, HBO waited two years to release it — and then, he recalls, they slotted it in at midnight on Friday. Neistat says, “I felt completely out of control. Like we put our everything into making this content and the method of sharing it was just so misaligned.”
So he thought, “I don’t want to do this anymore. I just want to get back to being that kid… making the videos that I liked, and just figuring out the best means of getting them to people.” Then he says, “I put my head down and just started focusing just on YouTube.”
And miraculously, it worked out.
“I spent years making this HBO show, years getting it out to HBO, all this marketing behind it. Nobody saw it, and then I make this silly little video and you know millions and millions of people saw [it]. That showed me something about distribution,” Neistat says.
It also showed him the potential “commercial opportunities” of YouTube.
Getting Started
Today, Neistat says, “Instead of looking to like Kubrick or Spielberg to say, ‘That’s what I want to be like,’ now, people are looking at creators, saying, ‘That’s the world that I want to be in.’ So I think, aspirationally, there’s been this gigantic, this seismic shift.”
And why wouldn’t people want to be YouTubers in the current environment?
Neistat says, “Formal filmmakers like me, we’re starting to turn to YouTube. And now, you’re seeing YouTubers that have incubated their talent so profoundly that they’re able to reach out and go to new places. Bo Burnham, with his movies and what he’s done, is a tremendous example of that.”
But nonetheless, Neistat says, “The most undervalued aspect is patience. Patience is everything. Like if you’re starting today, and you’re, like, I want to do this, if you’re not prepared to do it for the next 10 years without any success, then don’t start at all.”
He warns, “I think that YouTube and social media have really messed up our perspective on that; there’s this expectation that you should be able to explode because the barrier to entry is so low.”
“Everything now that you’d ever need access to be a great filmmaker is kind of at our fingertips. And I think that’s an amazing thing,” Neistat says. He explains, “We now have access to all of that stuff [to make films], even if for you, it just means a phone.”
MrBeastification vs. Finding Your Voice
In addition to concerns over unrealistic expectations, Neistat is also worried about YouTube’s increasing homogeneity, or “MrBeastification,” as he refers to content that is driven by analytics.
In contrast, Neistat says he lives by the Rick Rubin maxim: “the time for commercial considerations is after the work is complete.”
For Neistat that means, “Your movie is complete, then you say, okay, how do we actually make money off of this? And that’s what I believe. That’s what I subscribe to. That’s what I do.”
But Jimmy Donaldson’s approach to creating YouTube videos “is the very opposite of that,” Neistat says. “He has a team of geniuses that break down his video second by second: When was the peak engagement? OK, when did we lose engagement? All right, what was happening in that moment? How do we get rid of that? It is the scientific approach to making videos that has worked when it comes to really garnering views and making this incredible business and industry that he has uniquely created. I commend him for that. But that’s not why I’m in it at all.”
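For a concrete sense of what that second-by-second breakdown might look like, here is a minimal sketch; the retention numbers and the drop-off heuristic are entirely illustrative, not Donaldson’s actual tooling.

```python
# Entirely illustrative: flag the steepest audience drop-offs in a
# made-up per-second retention curve (share of viewers still watching).
retention = [1.00, 0.97, 0.95, 0.94, 0.85, 0.84, 0.83, 0.70, 0.69]

# Per-second change in retention; the most negative deltas are the
# "when did we lose engagement?" moments.
drops = [(second, after - before)
         for second, (before, after) in enumerate(zip(retention, retention[1:]), start=1)]

for second, delta in sorted(drops, key=lambda d: d[1])[:3]:
    print(f"second {second}: lost {-delta:.0%} of viewers")
```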
Nonetheless, Neistat says, “I think he’s opened doors and done things on a platform that no one thought was possible.”
“What Jimmy does is exceptional,” Neistat says, “but he’s the guy with the flute walking through town like this, and behind him are 10,000 rats copying him. They might have good views, but none of them stand out.”
Neistat admits: “There’s two kinds of people in this world: those who care about their views and liars. It hurts when you don’t get the views. But at the end of the day, it is the work that you have to stand with.”
He also contrasts MrBeast’s approach with one he admires for a different reason: “You think Quentin Tarantino is like screen testing his movies to check which parts are the most engaging? Or do you think he’s just giving everyone the middle finger until the movie is done and then folds his arms and he’s like, release it like that, or never release it? Pretty sure it’s the latter. And that’s why we respect him. That’s why his movies you’ll watch over and over and over. That’s why his movies make you want to go buy a camera and create yourself.”
After all, he says, “The movies that we appreciate for their artistic contributions to the universe never align with the movies that the most people show up and pay to see.”
This is where you can catch up and connect the dots. Between process and products. Between the ways we now create and consume content. This is an intimate space to glean coveted insight and interact with the people and products transforming the industry.
Evan Shapiro
From a keynote session on What’s Next for M&E with Media’s official Unofficial Cartographer Evan Shapiro to a conversation presented by the American Cinema Editors with the editor from Only Murders in the Building, the Insight Theater sets the stage for the trends and technologies you need to know.
Media cartographer and industry observer Evan Shapiro is set to deliver the keynote address at NAB Show New York. Known as media’s official unofficial cartographer for his visual charting of the industry’s continual evolution, Shapiro’s speech will center on “What’s Next” for Media & Entertainment. He’ll use this keynote to lay out what to expect in the next era of media, whether we’re ready for it or not.
Attendees can look forward to new research and insights, as well as Shapiro’s honest assessment of how the M&E industry can grapple with its next era. Preceded by remarks from NAB President and CEO Curtis LeGeyt, this keynote session is scheduled for Wednesday, October 25, at 10:30 a.m. on the Insight Theater stage.
October 16, 2023
Creators Are Brands — and Brands Mean Business
“Every creator online at this point considers themselves a brand. And when we think about brands, we think about running a business,” Bamboo founder and CEO Nick Urbom tells Lori H. Schwartz. Watch their full interview here or below.
Schwartz spoke with Urbom ahead of NAB Show New York to learn more about Bamboo and its plans for the Show. Watch the full video (below) to learn more about the company, social media trends, and the #EmpireStateofMind photo contest.
“Bamboo is a new social platform for creators to post collaboratively and to monetize without the platform taking a cut,” Urbom says. “We have formed this platform out of a lot of feedback from creators over the years looking for new tools that better meet their needs to run their businesses online.”
“To really, truly have a robust economy, we need a lot of different business models,” Urbom says. “And so we believe that the infrastructure of the internet and what has been built out over time with social media and tools that exist, that there is now a real driving need on the internet for creators who are making a living to have better access to ability to create their own products, set their own pricing, run their own businesses, and not be taxed, or, you know, have to split every dollar that they’re making with a middleman every step of the way.”
Online Trends
Urbom says, “We’ve had 15-20 years where people put out a lot of content for free; there’s been a lot of investment and building out into the ecosystem of the internet itself.”
“A couple of years ago, the stat that we saw around the creator economy was that it was comprised of 50 million creators online. And today, this statistic shows that there’s over 200 million creators,” Urbom says.
He explains, “There’s people that are making a living online in various different ways. And we see business models on multiple different platforms, most of which are driven around advertising. And then the current iteration that we’re seeing become more popular is around revenue shares.”
Another trend that’s key to understanding the state of the internet is the collab, or collaborative post.
“Collab is a term that you hear about all the time online,” Urbom says. “People are doing it on different platforms. And the way they’re typically doing it is by hashtagging, or by tagging other creators, and then you’re kind of going up against algorithms who actually sees what you are doing in a collab.”
Because of the popularity of this type of content, Urbom says, Bamboo was purpose-built to facilitate collabs. While in beta testing, creators told the company “that it was very tricky on the other platforms to really collaborate with other creators.”
“We’ve built this from the ground up to support creators collaborating in a shared space, evolving, iterating and growing their brands,” he says.
The end result is that “on Bamboo, you can actually collaborate with another creator, you can create a unique feed on any topic. And you can actually brand the content that you want to create under that space.”
#EmpireStateofMind
Bamboo is doing a three-way collab of its own. The company is sponsoring the #EmpireStateofMind photo contest at NAB Show New York.
The winner will have the opportunity to do a collab with creator Avori Henderson, centered around an upcoming merchandise release. They will shoot the merch launch together.
Urbom says Henderson has “really made a splash in the world of online content creation by doing merch drops and doing collabs with other talented creators and designers and put together a really cool team around some of the different products that she’s released online.”
Henderson will attend NAB Show New York and meet the winner in advance of the collab.
Make Connections, Share Some S’mores at NAB Show New York’s Campfire Sessions
This meetup explores strategies for creators to monetize their online content and leverage social media platforms for collaborations and career building. Attendees will learn how to: create unique feeds; collaborate with others to co-host feeds; and tap suites of monetization mechanics.
Speakers: Bamboo Head of Marketing Dina Marovich, CEO Nick Urbom and founder/COO Hampus Wahlin
This meetup focuses on emerging technologies and creative use cases of current trends that inspire, challenge, and spur our creativity.
Speakers: Creative Producer and Visual Effects Supervisor Eric Alba, DotDotDash Executive Director of Technology Adam Paikowsky and Asher XR founder/CEO Christina Lee Storm
Join startups and pioneers to discuss what’s shaping the future of M&E. This conversation is for those looking to gain insights, forge partnerships or secure funding.
October 16, 2023
Video for Good Is Good for Video
TL;DR
Is video a force for social good? If so, how can we use content to push for change in our screen-centric world? Telly Awards Managing Director Amanda Needham has some thoughts.
Amanda Needham explains how video is essentially “endemic” to 21st century society, meaning “video is societal change because it mirrors all the things that we see on a daily basis.”
Specifically, the Telly Awards managing director says, “if you’re choosing to center certain voices or diversity or themes, it really is a pretty powerful medium through which to have conversations about the kind of values we want to have as a society and where we want to go.”
So how can we use content to push for change in our screen-centric world?
Making the Choices
“A deeper way to look at it is that how video is made and the intentionality behind what we put on screen also has an effect,” Needham explains.
That means considering “where you put your money, and who you hire on your crews, and where you source from.”
While some people may see these considerations as constraints, Needham disagrees. For her, this level of intentionality means that “going forward, we end up having better stories in general.”
People who have made this kind of content, she observes, “have done it in really interesting, progressive, forward ways. And a lot of the content on the screen is also talking about stories that matter.
“So our interpretation and our understanding of social change in this space is all the touch points that go into making content, and kind of proving to people that you can make incredible work, and to be intentional about the choices you make with your money and your crews.
“It’s also about the broader sort of ability for people to think holistically about social impact work when they’re making it, regardless of the type of work they’re making.”
She hopes attendees will leave both events thinking “about sustainability and social impact as a lens through which you see all things.”
“Our post-screening discussion is going to talk about what it takes to make film in the social impact space, behind the camera, in post, in production and then also on screen.
“So the content isn’t necessarily going to be like ‘how to save the trees.’ It’s going to be all the amazing work that is happening,” Needham concludes.
October 15, 2023
Cinematographer Sarah Whelden: What It Takes to Collaborate
TL;DR
Film school and on set aren’t the only places where you can learn key cinematography skills, says DP Sarah Whelden. She says she picked up many of her leadership competencies in managerial roles outside the industry.
Whelden believes that a DP’s role is to support the director and to help them execute their vision, while also making sure that the best ideas get heard on set. She emphasizes collaboration with her crew and believes that you can learn something from everyone, regardless of their experience level.
Learn about her experience shooting “Chispa” using Fujifilm’s just-released GFX 100 II.
Sarah Whelden may have gone to film school, but she credits stints at a bike store and a scoop shop with teaching her lessons that have been crucial to her work on set.
Hampshire College taught her about documentaries; retail taught her management skills. These jobs were “a big part of learning to become a better DP, which was that sort of foundation of learning how to lead people,” she says.
After film school, Whelden moved to Portland and worked in a variety of roles for a commercial production house, then she transitioned to freelancing and, eventually, shifted focus to narrative storytelling, which helped her make the jump to LA.
“It was an interesting sort of experience jumping into a new space and creating a new community,” Whelden recalls.
Of course, Whelden notes, her career of choice necessitated the ability to build a quick rapport with new colleagues in new places.
“Just working as a DP requires traveling to new cities all the time, and having to build that community quickly. You know, in some cases, especially in a commercial, where you may only be around these people for a few days, and you need to really build something with them,” Whelden says.
“But then in just transferring my home city, it takes on a whole new sense of that because you need that long term community. And not only on set, but also just to help support you and your career as you move forward.”
Vision and Collaboration
“For me, working as a DP is really about supporting the director,” Whelden says.
But it’s important to note that she doesn’t emphasize pecking order on set. “We’re a family more than anything, you know, and I don’t think anybody’s beneath anybody,” she says. “The hierarchy is there just to make sure something gets done because somebody has to have the final say.”
“But at the end of the day, it’s about supporting the vision of others and trying to like, really understand what they’re looking for, what the story is calling for, because, you know, even though I’m here to support others, part of supporting others is sort of bringing my own voice to the table.”
And in order to fully contribute, Whelden has found that she must understand her own voice to determine whether a project is the right fit for her (or if she’s the right fit for the project and the director). As part of that process, she asks, “What is the story? And how do I be true and honest to that? And also, sort of who are the storytellers? And how do I hold their vision?”
Additionally, Whelden tries to assess “does this feel like the director has a collaborative approach? Because, you know, some directors don’t. Some, you know, have more of an auteur approach, which is wonderful, but I may not be the best person to carry that for them. Because really, my whole philosophy is just based around this collaboration. It’s about giving everyone a voice on set, and listening to that voice and knowing at the end of the day, the best idea wins.”
Once she commits, Whelden knows “it’s not just me, as a cinematographer [responsible for the story]. There’s so many department heads, production designers and costume designers and key makeup artists to lift my work. And then there’s also, like, so many crew members who are working to support me and all these other departments, you know, my G and E team, my camera team, the colorist, and we’re all sort of like, working together to find what makes the story best.”
To be successful, Whelden says, “I need to trust the director just as much as the director needs to trust me. And, you know, I need to trust my team just as much as my team needs to trust me. And I think we all need to sort of have each other’s backs.”
On set, “it’s about looking out for things that might come up and trying to nip them in the bud before they happen, as much as possible. When things do happen, you know, if anything happens within camera, or lighting, that’s on me, you know,” Whelden says. “I am the DP, I’m the one who’s sort of like, leading these two departments, and if anything happens within there, that’s my responsibility that I have to take.”
To facilitate that trust, Whelden looks at hiring a crew like, well, casting a movie. She explains, “Part of that is thinking about dynamics, not just with me, but with each other. Knowing… who are my key players, and then often letting them sort of choose the folks that they typically work with.”
This is especially important because “once you get on set, there are going to be a lot of different personalities, not just within my departments, but within all departments.”
“At the very least,” Whelden says, “I am setting an intention for how I like to work within my department and within other departments so that people know, going into the shoot, how to collaborate … it’s as much them knowing how to collaborate with me, as me knowing how to collaborate with them.”
And it’s not just the personalities that will be varied on set. The crew will often represent all ages and experience levels and backgrounds, as well.
“There’s so many different cultures and generations, ideally, that are coming together on set. And I think the best work comes out of so many people, so many different life experiences, coming together to create something.” However, she says, “It can be challenging, again, to sort of figure out how to manage those.”
Whelden explains that an important factor is recognizing that you can still learn from those who have fewer years on their resume. “It’s learning to sort of lead, but also learn at the same time.” But as DP, she knows to keep in mind that “I know what the director wants better than anybody in my department, because I’ve had a lot more conversations with them.”
“Chispa”
One of Whelden’s recent projects, “Chispa,” aptly illustrates the challenges and joys of collaboration on set.
For a variety of reasons, Whelden says, “it was a really tricky shoot.” First, “Chispa” required both color and black-and-white deliverables. Second, the director wanted a lot of visual effects. And the kicker? No one had ever worked with the camera before.
Fujifilm sponsored the film, and in exchange, Whelden says, “We agreed to shoot on their new camera, which just got released [in mid-September 2023], GFX 100 II.”
To raise the stakes even higher, Whelden says, “I didn’t even get the camera until maybe 48 hours, 72 hours, before we started shooting.”
She explains, “A lot of the normal testing and this and that would normally go into it wasn’t possible. So I had to sort of increase that sort of trust I needed to have in the people around me; I needed to trust the people from Fujifilm, who were there to, you know, help make sure the camera stayed functioning, or bring up our backup camera if there were any issues. But I also had to trust, you know, my camera department to just be communicative.”
Whelden also says that Fujifilm put a lot of faith in her for this project.
“Part of the mission was to test [the GFX 100 II] and show it off. But … I was there for the director, again, to see her vision through. So how do I make sure I’m delivering what Fujifilm needs as far as this camera? To make sure we’re pushing it and trying different things and seeing …what its capabilities are,” Whelden says. “And where, maybe, there’s a shortcoming that they have a little time to, sort of, work on before officially launching it, and kind of pushing the limits. But also making sure that none of that was jeopardizing the creative vision [of the] director.”
Ultimately, she says, “There was a lot of communication that had to go into this. We had a great, incredible team. But it required so much planning.”
October 15, 2023
Live Events Have Become a Whole Cinematic Thing
From Beyoncé’s “Renaissance: A Film by Beyoncé”
TL;DR
It’s not enough to just go to a concert anymore. People want to experience the event before they go and after they’ve been. The business of live is changing too.
For artists or brands like Beyoncé, Harry Styles or Taylor Swift, as well as entertainment companies, business has become much broader than selling out a tour or a movie or merch.
Audiences seek compelling, profound experiences that allow them to have agency and authenticity.
Getting tickets to live events is more difficult than ever because the global market for live events is effectively a monopoly run by Ticketmaster.
Despite widespread public criticism and political scrutiny of Ticketmaster parent company Live Nation Entertainment, solutions are not forthcoming.
Taylor Swift’s Eras Tour only began in March and is set to become the biggest tour of all time only a third of the way through its worldwide run, with North American ticket sales alone projected to top $2.2 billion. According to Live Nation, Beyoncé’s Renaissance World Tour finished having earned north of half a billion dollars in ticket sales.
Are these mega-star anomalies or is such success replicable?
“This feels like a cultural moment we’re living in,” says Adam Chitwood of TheWrap, joining a conversation that assessed trends in live entertainment.
“It’s not enough to just go to the concert. People wanted to experience the concert before they go and after they’ve been.”
Everyone agrees that the pandemic has influenced how we now view live events. Two years of fans not being able to go out has generated a pent-up desire (mania, even) to do so now that they can.
But it’s not just about live or music. The billion-dollar global box office takings of “Barbie” were in part propelled by audiences participating in the experience more than they would any standard movie — dressing up and attending multiple showings.
From “Barbie,” written and directed by Greta Gerwig. Cr: Warner Bros.
We want to enjoy an experience in the company of others, including strangers.
“[Audience members] want to go with a bunch of friends, they want to buy the merch, and they want to participate in every way with their full energy,” confirms Levi Jackson, head of music marketing at WME (William Morris Endeavor). “Now they want a shared experience.”
For artists — or do we call them brands? — like Beyoncé, Harry Styles or Taylor Swift — as well as entertainment companies — business has become much broader than selling tickets or merch.
“We have all these different products and verticals that are involved with each actor or event,” explains Ross Gerber, Co-Founder, President and CEO of Gerber Kawasaki Wealth and Investment Management.
Jeff Clanagan, president of Hartbeat, Kevin Hart’s production company, notes that despite the higher cost of living, the demand for live experiences has rocketed: “Ticket prices have never been at this level. Fans are paying $300 to $1,000, you know, sometimes more depending on the artist.”
Compelling, Authentic Experiences
That Taylor Swift’s concert film is releasing into cinemas while her tour still has a year to run was never going to diminish the demand to see her live. “Absolutely zero chance it’s going to impact ticket sales,” says Clanagan. “There’s still a huge audience that might not have gone to that stadium to see Taylor or Beyoncé because of the ticket prices, but also people who went to the shows want to really have that experience in a theater. So it’s just another touch point for the consumer to share that experience.”
Not every artist can command this volume, however. Fri Forjindam, who leads global business development, branding and communications for Mycotoo, thinks that’s down to artist authenticity: “You can’t quantify an emotional connection that resonates with people. That means there’s a promise that’s being made. [The artist is] saying you’re going to get all of me, you’re gonna get my full catalog, you’re gonna get performance showmanship, tech, everything VIP. There’s an experiential overlay that is delivering on that promise, as opposed to just gouging.”
Rather than just blindly consuming anything an artist does, she thinks fans are extremely discerning. “They don’t want bulls**t. They want to come and have a compelling, profound experience that allows them to have agency and authenticity, and to see that in the things that they’re engaging with.”
Mycotoo has worked with studios on IP ranging from Netflix’s “Stranger Things” global tour to “The Mandalorian” to Prince’s Paisley Park, creating everything from theme parks to live events to brand activations.
Forjindam says the job is to leverage IP into an ecosystem that engenders loyalty. Whether it’s a concert, a museum or a theme park, how do you take all those principles and turn them into a revenue-generating experience or entertainment destination?
boygenius performing “Not Strong Enough”
Leveraging Ideology and Mythology
One ingredient to success is understanding context. It’s vital, she says, “to have a shared emotional experience align with a brand and artists that reflects who they are in their ideology, in their consumer spending, in their way of life, in their sexuality, in all the things that make you whole. It can’t just be about seeing the artists, there needs to be something deeper.”
For example, when working with Netflix on the “Stranger Things” global tour, the intent wasn’t to recreate the show, but to give fans a reason to get excited about the next season of the show on Netflix.
The goal was to “give them a physical place where they can commune with others and have this sort of ‘choose your own adventure’ [experience] and be the hero of their own story, using live performance. It’s redefining what live entertainment is first and then figuring out what the revenue verticals are to make it a viable business proposition.”
Ticketing Trauma and Technology to the Rescue?
It’s true that fans continue to have difficulties getting tickets to live events. The global market is pretty much a monopoly run by Ticketmaster, and its parent company Live Nation Entertainment received widespread public criticism and political scrutiny over blunders in selling tickets to the Eras Tour. There’s no easy answer.
Jackson says, “We’ve worked with a bunch of tours and talent, and we worked with every ticket company, and I think the challenge is actually too complex for an individual artist to fix. Even for someone like Taylor Swift and Beyoncé. These companies are so big, you know, the contracts that they have, the tickets are difficult, but the technology of ticketing is actually so challenging.”
From “Kevin Hart: Reality Check,” courtesy of Peacock
Gerber admits he can’t express his true feelings about Ticketmaster, “because it’s, controversially, you know, negative,” but suggests that an individual’s smartphone could be a better way of validating tickets.
Forjindam agrees: “We’re using technology to attempt to solve the climate crisis. We’re using technology for automated vehicles and smart cities and literally building Ukraine from the ground up. Why can’t we use technology to figure this [ticketing issue] out?
“How do we allow it to be able to maybe learn what someone’s user pattern is, or fandom level is, as a way to give them additional points to get ahead of the line because they are a legitimate fan, regardless of whether they can afford $1,500 or not.”
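To make Forjindam’s idea concrete, here is a toy sketch of how a ticket queue could weight fandom over spending power. Every signal, weight and name below is hypothetical; no ticketing company is described as doing this.

```python
# Purely illustrative fandom scoring for queue priority.
def fan_score(months_following: int, shows_attended: int,
              merch_orders: int) -> float:
    # Weight sustained engagement more heavily than raw spend.
    return 2.0 * months_following + 5.0 * shows_attended + 1.0 * merch_orders

queue = {
    "longtime_fan": fan_score(months_following=96, shows_attended=3, merch_orders=2),
    "week_old_account": fan_score(months_following=1, shows_attended=0, merch_orders=0),
}

# Earlier queue access goes to higher scores, not bigger wallets.
for name, score in sorted(queue.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f} points")
```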
From “U2:UV Achtung Baby Live At Sphere”
Not coincidentally, there is a trend toward upgrading and building venues with new technology, not just giant LED screens but also better sound and lighting systems, to give fans a more immersive experience. The pinnacle of this right now is the Las Vegas Sphere.
“The Sphere is challenging artists to really think about that experience,” emphasizes Jackson. “Every show that’s going in there at the moment has to be bespoke to that venue. It’s making a unique experience as a destination at that venue — people are flying in from around the world to go to the Sphere.”
“In the end, this is about trying to make a connection with our audience,” says U2’s Bono. “It’s Las Vegas or bust, baby.”
October 15, 2023
How #GALSNGEAR Is Going to Get “Tequity”
TL;DR
Upskilling is crucial to everyone’s career, but Amy DeLouise notes that women are often part of “the leaky pipeline.” #GALSNGEAR aims to address that problem.
“Our focus right now is making sure that the [Tequity] Hub is a one stop shop for women and people who identify as women who want to upskill and reskill and propel their career to the next level,” DeLouise says.
#GALSNGEAR will host “Empowering ACCESS with AI” on October 26 at 4 p.m. at NAB Show New York. DeLouise says the session will explore how “AI [is] empowering us as creatives and as content developers.”
“I think anyone who’s in an industry that’s touched as much as ours is by innovation has got to be upskilling and reskilling constantly,” says #GALSNGEAR founder Amy DeLouise.
“Upskilling is a huge part of making sure that your career is moving forward in the direction you want it to move in,” she says.
DeLouise’s personal upskilling initiative involves “always reading things, I’m taking workshops, I’m attending events like NAB Show New York and NAB Show, to be sure that I’m on the leading edge of everything that’s happening.”
Upskilling is crucial to everyone’s career, but DeLouise notes that women are often part of “the leaky pipeline, where women might start out in our industry, but somehow they’re dropping out sort of early mid-career into other industries. So other people are getting them, we’re not keeping them.”
#GALSNGEAR aims to combat that. The organization, DeLouise says, “is a community promoting equity for women in media and entertainment. And we do that in a number of ways: with networking events, with upskilling, and with speaking opportunities for women at industry events. And we partner with manufacturers and broadcasters and leading organizations in the industry to make sure that we’re leading change by empowering women.”
Tequity
“Tequity” is a DeLouise original.
She explains, “I was talking to a lot of engineers at a particular event, and engineers like equations. And so I said, basically, ‘#GALSNGEAR is promoting an equation for tech equity. And what that is Access plus Training plus Visibility, multiplied by a Community supporting you, that equals Tequity.’”
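Written out, the equation DeLouise describes is: (Access + Training + Visibility) × Community = Tequity.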
The online Tequity Hub is a reskinned version of the platform #GALSNGEAR debuted during the COVID-19 pandemic to deliver a virtual version of its annual leadership training workshop.
“Our focus right now is making sure that the hub is a one stop shop for women and people who identify as women who want to upskill and reskill and propel their career to the next level,” DeLouise says.
The hub hosts monthly Tequity Tuesday Talks, to which they invite M&E industry leaders to speak on a variety of subjects during a half-hour Q&A session. The event closes, DeLouise says, when they “break into sessions where people can either follow up with the speakers, they could see a demo of some new technology, or they could look around at the links that we’re starting to develop from our partners with a variety of upskilling workshops that they can take mostly for free in a variety of areas.”
October’s Tequity Tuesday Talk is scheduled for October 17 at noon (ET), featuring Adobe’s Alexis Van Hurkman discussing “How to Become a Beta Tester (and Why It Can Help Your Career).” Source Elements’ Rebekah Wilson will also provide remote collaboration software, which members of the #GALSNGEAR community will help to beta test.
Beta Testing
“Beta testing is one of those areas that nobody really thinks about a lot,” DeLouise says. “And yet, if you’re using a camera, if you’re using an edit system, if you’re using an IP workflow, somebody has tested that software or hardware out, probably lots of people.”
What does beta testing involve? “Part of the job of a beta tester is actually to use things and try to break them. And let the engineers know what broke and what made it break, and then they can try to figure out how to fix it.”
Beta testing also has advantages for the tester. DeLouise says, “It really gives you insights into the leading edge of your industry. It gives you an advantage because when that tool comes out, you’re the first person who knows how to use it. And it also means that, you know, for people who are doing really in-depth types of beta testing, they might win awards; they get to write white papers; they get to be involved more at the industry level. And they get that visibility that we talked about in the equity equation that women are sometimes missing out on.”
#GALSNGEAR is launching its own beta testing program in concert with a few manufacturers (to be announced). The organization reviews each company’s requested specs and then pairs appropriate community members with the program.
“Right now, this is a very curated program,” DeLouise explains. “You could almost think of it as a kind of a one-to-one mentoring, if you will, program. So a lot of the companies come to us, and they say exactly what they need in a beta tester.”
NAB Show New York and Beyond
#GALSNGEAR will host a Connect session at NAB Show New York, dubbed “Empowering ACCESS With AI.” Scheduled for October 26 at 4 p.m., the session, DeLouise says, will explore how “AI [is] empowering us as creatives and as content developers.”
It will feature “a really nifty demo from Carole Pigeard from Newsbridge and Sepi Motamed from NVIDIA. Both companies are utilizing AI and harnessing it in different ways to make people’s jobs and lives easier and more interesting. So I think that’s going to be a lot of fun, and we’re pairing it with a happy hour.”
Then, in January, #GALSNGEAR will host another AI-centric session, this time for Tequity Tuesday. “The subtitle is ‘What’s AI Got To Do With It,’” says DeLouise. “We’re gonna address, with a couple of sound mixers and sound designers, whether or not they are using AI-enabled tools, and how that fits in their workflow.”
“I think there’s just a lot of tools that are out there, and we just have to learn how to harness them,” DeLouise says.
October 15, 2023
Taylor’s Version: How the “Eras Tour” Concert Film Could Change Cinema
From Taylor Swift’s film “Taylor Swift | The Eras Tour”
TL;DR
Taylor Swift’s “The Eras Tour” is a nearly three-hour concert film that’s receiving both critical acclaim and big box office dollars.
The studios are reportedly less than pleased about the distribution deal Swift struck with AMC Theatres. (Some might even say there’s a likelihood of bad blood between Swift and the studios…)
Concert films must strike a balance between recreating the concert-going experience and improving upon it, giving attendees the sense that they both had the best seat in the house and an additive, cinematic experience.
Variety’s Chris Willman says Swift’s performance in the concert film is “2 hours and 45 minutes of nearly nonstop acting, writ large for the back row of SoFi Stadium and, now, Imax and Dolby.”
The concert film “will be playing AMC, Regal and Cinemark theaters this fall. AMC also is releasing Taylor Swift: The Eras Tour on its own in what is a first distribution initiative for the circuit, however, it has tapped Variance Films to book the title. The pic will also be booked at Cineplex theatres in Canada and Cineopolis in Mexico,” according to Deadline.
Swift’s film “set a single-day domestic record for AMC after selling $26 million in less than 24 hours, besting the all-time benchmark previously held by ‘Spider-Man: No Way Home’ ($16.9 million),” Variety reports.
“The world’s biggest music star teaming with the world’s biggest theater company on a film that could play for months is as close to a no-brainer as exists in entertainment,” Puck’s Matthew Belloni writes about the AMC deal. (Also a no-brainer, Belloni reports, was Universal’s decision, under duress, to change “The Exorcist: Believer’s” release date from the targeted Friday the 13th that would have competed directly with Swift’s movie.)
“For traditional studios, ‘The Eras Tour’ might be the most profound example of the money that’s been left on the table because of dragged-out negotiations with the Writers Guild of America and the Screen Actors Guild,” David Sims writes in The Atlantic.
“Perhaps it feels like a stretch to claim that concert films will be what saves cinema, but with Hollywood running on fumes, it’s much more possible for their movies to have an impact—or at least for the large impact they would have no matter what to seem like the only thing happening at the multiplex,” writes Wired’s Angela Watercutter. “And not for nothing, but finance types are literally out here claiming these two artists [Taylor Swift and Beyoncé] boosted the US gross domestic product in the third quarter of this year.”
The Making of an “Eras” Experience
The creators behind the film deserve massive praise.
Willman writes, “A team of five editors gets credit for assembling all this work in such a hurry following a shoot at her U.S. tour finale in L.A. just two months ago. But the movie reflects the ethos Wrench has showed off in other concert films, like the recent ‘Billie Eilish Live at the O2,’ in not cutting just to create excitement where it already exists.”
In terms of the visuals, Willman observes, “A healthy balance is struck between knowing there’s a hell of a lot to take in in this stage production and knowing that the thing we most want to take in is Swift herself.”
From Taylor Swift’s film “Taylor Swift | The Eras Tour”
Swift herself gave the creators an enthusiastic cosign during the surprise early premieres at AMC Theaters in the Grove on Thursday. THR’s Kristen Chuba reports that Swift told audiences, “It is the perfect capture of what this show was like for me.”
She also told attendees, “I think that you’ll see that you’re absolutely a main character in the film, because it was your magic and your attention to detail and your sense of humor and the ways that you lean into what I’m doing and the music I create.”
Per Calum Marsh at the New York Times, “Filmed over three nights in August at SoFi Stadium in Inglewood, Calif., and directed by Sam Wrench, ‘The Eras Tour,’ like most concert films, aims to capture some of the magic of seeing the artist perform live.”
Despite this very tough assignment, “Apart from maybe those digital title superimpositions, you’d be hard-pressed to point to any wrong moves Wrench makes in transferring the show from stage to screen,” Willman writes. He’s unconvinced that the announcements of the different eras (AKA albums) were truly additive for most viewers.
From Taylor Swift’s film “Taylor Swift | The Eras Tour”
“The demands of a film are incredibly complicated, and faithfully reproducing the look and sound of a concert for the screen is an arduous and painstaking process for the filmmakers and their crews,” writes Calum Marsh.
“You can make [the concert film] an equally good experience, but it has to be a filmic or cinematic experience rather than trying to compete with the live experience,” film and music video director Jonas Akerlund told Marsh.
Akerlund explained that time and money and opportunity are the secret to making a concert film cinematic, followed by editing “with the precision of a four-minute music video.”
Part of that is ensuring the audio is superb — maybe even better than it was live. “The main thing we’re trying to do is provide the theatrical audience with the best seat in the house,” John Ross, the rerecording mixer on “The Eras Tour,” explains.
Variety’s Willman argues that the film achieves this: It “magnifies all of it, in a next-best-thing-to-being-there way (even though no one who missed it will completely shed their FOMO).”
To achieve this, Marsh says, “the tone of the room is essentially applied like a filter to the raw sounds recorded from the artist onstage. This filter, known as impulse response, takes readings from actual physical places, then ‘synthetically reproduces the sound of a real space like a club or stadium,’ said Jake Davis, the lead mix engineer at SeisMic Sound, an audio facility in Nashville that specializes in concert films.”
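As a rough illustration of how an impulse response gets applied, here is a minimal convolution-reverb sketch of the general technique Davis describes, not SeisMic Sound’s actual pipeline. The file names are hypothetical, and it assumes mono 16-bit WAV input.

```python
# Minimal convolution reverb: apply a measured room impulse response
# (IR) to a dry stage recording. Assumes mono 16-bit WAV files.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("dry_stage_feed.wav")  # hypothetical raw feed from the artist
_, impulse = wavfile.read("stadium_ir.wav")     # hypothetical "tone of the room"

# Convolving the dry signal with the IR applies the room like a
# filter, synthetically placing the voice in the space.
wet = fftconvolve(dry.astype(np.float64), impulse.astype(np.float64))

# Normalize to prevent clipping, then write the result back out.
wet /= np.max(np.abs(wet))
wavfile.write("voice_in_stadium.wav", rate, (wet * 32767).astype(np.int16))
```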
Mixers are also tasked with making “the concert film sound more like what the artist wanted than what necessarily occurred on the night it was filmed,” Marsh says.
Precision – but not perfection – is the name of the game for concert films. Marsh writes, “It’s a bit like touching up a portrait in Photoshop: it’s tempting to clear blemishes, but too much airbrushing can make you look fake.”
Despite all of the behind-the-scenes effort, Marsh writes, “The best concert films will, to the unwitting viewer, seem like nothing more than filmed concerts — the filmmaking itself remains invisible.”
October 11, 2023
Audience Expansion Is About More Than Content for Peacock’s Quincy Olatunde
From “Kevin Hart: Reality Check,” courtesy of Peacock
TL;DR
Streaming Media Connect 2023 featured Quincy Olatunde, NBCUniversal/PeacockTV VP of D2C products, in conversation with Evan Shapiro.
Olatunde discussed Peacock’s planned expansion into 50 different African markets through a partnership with Multichoice, emphasizing the need for localization and community building.
He also discussed how to effectively leverage creators to build fandoms and ensure that communities are represented.
“I’ve never seen any company do what NBCUniversal does, when it comes to simulating events or live events,” says Quincy Olatunde, NBCUniversal/PeacockTV vice president of D2C products. “They just know how to put the show on.”
Olatunde expressed confidence in NBCUniversal’s ability to put together impressive live shows, while emphasizing that his eye is on Peacock’s technological capabilities. “You can have the best time in the world, if you have a stack to deliver that,” he says.
Streaming Media Connect 2023 featured Olatunde in conversation with media cartographer Evan Shapiro. The two discussed globalization and localization, changing media consumption and building fandoms. Watch their full keynote, below.
“I’m excited about localization,” he says. More specifically, he seems excited about localization’s potential: “The education that that brings along with it, you know, like really opening up the world. And the more you understand about communities, about cultures and groups, the more tolerant you become, right, or tolerant we become, the better our laws are.”
Peacock’s Expansion
Shapiro often emphasizes that the future of M&E doesn’t lie solely in North America, with many opportunities emerging in the global South. Peacock appears to be one of the companies with its eye on the ball.
“Africa is great… it’s a market that’s growing up with this technology of streaming, right?” Olatunde says. “The behaviors are changing.”
Olatunde explains that the streamer is partnering with Multichoice to bring Peacock to 50 markets in Africa in order to capitalize on this shift. “Peacock’s stack is going to be powered on” Multichoice’s footprint, offering the technology to complement their reach and their content, Olatunde says.
“It’s all about technology advancement, right? That helps us push forward, but you can’t just wait until that happens, right? You’ve got to be in the play. You’ve got to be part of the community, right? You’ve got to invest in the local market. You have to invest in local content.”
With all of NBCUniversal’s great properties, why is it crucial to emphasize local content?
Olatunde explains, “Consumption has changed. What’s important to [viewers] has changed.”
He adds, “And what’s important for them is inclusivity. Right? What’s important to them is the consciousness of the company to deal with, right, what they put their money behind, what they buy into. And we have to understand that.”
Ask yourself, Olatunde says: “The people behind the screens — how diverse is that? What’s the story you’re telling? How inclusive is it?”
“We’ve got to understand them for us … to build a relationship with them,” Olatunde says. “I look at it from the perspective of ‘How can I build a relationship with this person?’ It’s creating fandoms out of them.”
Why are fandoms important? “As media, we’re not just competing with ourselves,” Olatunde says, echoing another of Shapiro’s regular points. There are only 24 hours in a day, and consumers must choose how they spend their time, whether that be with TV, video games, books, or other hobbies.
Leveraging the Creator Economy
Part of creating fandoms in 2023, Olatunde says, is having “a continuous relationship” with consumers. One of the best ways to be with “fans [is] through these content creators.”
“I do feel like the sports department of all things seems to really lean into the creator economy in a pretty interesting way,” Shapiro commented.
After all, “brands have people in it. And we love the brands we work with,” he says. That also means companies should not “try to dictate how they do the creative” because creators tell him, “let me do what I do best.”
Once you’ve established your brand “as part of that community, they’re willing to pay premium for it,” Olatunde explains.
“If you have consistency, you’re part of the community. You don’t wait until you have a big show to produce,” he says. “We’re always in that world.”
Part of doing the market expansion properly requires thinking “almost like an R&D department.”
And that also means, if you want to get into a new market, Olatunde says, “You fund it properly. It’s not just… [a] cost center, but a revenue potential.”
Whether working with distribution partners or creators, Olatunde says, “I think at the end of the day, it’s all about ‘How can we collaborate?’ And how can we partner better, but still be competitive in a healthy way. So we can give our consumers… a better experience.”
AI Is “Dumb” and Humans Wield the Worrying Power (Say MIT and Stanford Researchers)
TL;DR
As AI continues to transform how virtually every industry conducts its business, Mehran Sahami, computer science professor and chair at Stanford University, predicts the growing technology will eventually translate into “some labor displacement” within entertainment companies.
Sahami also noted that displacement of labor will be “uneven” across industries as key leaders mull over how to most productively utilize artificial intelligence to enhance their business strategy.
Daniela Rus, who serves as director for MIT’s computer science and AI laboratory, argued for regulation and expressed a worry about inherent bias in data sets.
It is not AI we should be worried about per se, but the humans who work with the technology. That holds true for those with one hand on a nuclear button as well as for big business looking out for their bottom line. This was the argument made by a pair of AI experts in a conversation hosted at TheWrap’s annual conference, TheGrill. Watch the full session above.
“AI is just a technology [about] which you should not be necessarily terrified, but [you should be] concerned about who wields the power of AI,” said Mehran Sahami, computer science professor and chair at Stanford University.
He cited a recent chat with someone in the publishing industry who cancelled plans to hire for six new jobs because their existing staff could take on that same work using AI technology.
“It was not AI that made the decision whether or not the jobs exist; it’s human beings that make that decision,” said Sahami. “So what AI enables is more possibilities, [and] one of those possibilities that it creates is job displacement. But people ultimately make that decision. This is going to shift the economic landscape, but the decisions are still ours.”
Daniela Rus, director for MIT’s computer science and AI laboratory, characterized some humans wielding the power tools of AI as “supervillains.”
“AI can give us a lot of benefits, but the same benefits can empower supervillains,” she said.
They both sought to reassure those who might be terrified of AI. “They’re not that powerful,” she said. “They cannot replace human creativity. They are not our equals; they could be our assistants. They could empower us to do more with what we can do already, they can help us be more productive. They can help us with knowledge, they can help us with insight, but the tools themselves are kind of simple tools that work based on statistical rules.”
AI is good at some tasks, especially the language and computer vision components that are empowering us to do more, but in terms of matching human creativity, we are not there, she said.
“What I believe is that humans and machines can work together to empower the humans. So we have to find ways for AI to support the production of movies,” she continued.
“With AI you can help with some routine tasks like fixing color across the film or anticipating different types of storylines, that people could then evaluate. You can help with error correction as you generate the video. But all of these are really routine tasks. They’re really not where the creative element sits. At best, they could generate maybe B or C level scripts, but they cannot generate the kind of stories that capture the important aspects of the human condition, or provide political commentaries.
“I cannot imagine the Barbie script being generated by a machine,” she said. “Maybe an individual character could be shaped, but the whole story is necessarily a human story.”
Sahami made the point that it doesn’t matter whether an AI can generate empathy or emotion when it comes to the creative arts since these are attributes that we each bring with us to the experience.
“One thing that AI is getting much better at is basically having an interaction that we ascribe meaning to. Generative AI generates words and pictures, which are exactly the things we give meaning to. So it can certainly evoke emotional responses from human beings, because we’re the ones who create that.”
He agrees, however, that AI can be manipulated (prompted) to generate outputs that evoke particular responses which may be designed to subvert the truth.
“That’s what misinformation is all about. How do I get people angry enough that they vote for the person I want them to vote for? What are the guardrails we put around AI that allows us to know that the emotions that are being generated [within us] are being generated by this thing,” he says.
“When we go to a movie, we don’t come in there thinking, Oh, I’m just gonna sit here and have no emotional reaction. We want an emotional reaction. But at the same time, we realize that it’s fake, that it’s a movie. We don’t necessarily realize that when we read something on social media. So those are the places where we [need] to have some indication [of AI involvement].”
Regulation to Watch the Watchmen
Much of the discussion turned to pondering whether, and to what extent, regulation of AI is needed, not least to guard against the issue of inherent bias in the data on which an algorithm is trained.
“We’re worried about bias,” said Rus, who serves as one of the US representatives to the Global Partnership on AI. “The research community is not giving up. In fact, there is a very energized movement to align [data/AI] to think about human values, and ensure the algorithms that get deployed are aligned with human values.”
She noted that AI-driven facial recognition is likely to have been trained mostly on “white blond faces,” so the system is going to produce mostly a white blond outcome; or, more nefariously, it could be trained to accentuate differences from that “norm,” so it would actively highlight people with different colored skin.
But she said, “You can mathematically rebalance the data. You can mathematically evaluate whether the system as a whole has bias. And then you can fix it, using mathematics [which is] readily available now.”
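As a minimal sketch of the kind of rebalancing Rus alludes to, the snippet below assigns inverse-frequency weights so each group counts equally during training. The groups and counts are made-up placeholders, not a description of any production system.

```python
# Inverse-frequency sample weights: rebalance a skewed dataset so
# each demographic group contributes equally. Data is hypothetical.
from collections import Counter

samples = [
    {"id": 1, "group": "A"}, {"id": 2, "group": "A"},
    {"id": 3, "group": "A"}, {"id": 4, "group": "B"},
]

counts = Counter(s["group"] for s in samples)
total, n_groups = len(samples), len(counts)

for s in samples:
    # Each group's weights sum to total / n_groups, evening out the data.
    s["weight"] = total / (n_groups * counts[s["group"]])

# Group A samples each weigh ~0.67 (3 x 0.67 = 2.0); the single
# group B sample weighs 2.0, so both groups contribute equally.
for s in samples:
    print(s)
```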
Sahami argued for comprehensive federal privacy legislation to regulate AI “because at the end of the day, it’s a question of who chooses, and who chooses right now is a small group of executives at some very large companies.”
He likes the idea suggested by Sam Altman, head of OpenAI, of having a sort of AI equivalent to the International Atomic Energy Agency to act as a buffer.
“You need to think about risk mitigation, who has the power over these tools? How do you actually put guardrails and inspections around these things so that they don’t get used for purposes that they weren’t intended,” he said.
Rus argued for a “delicate balance” between over-regulation and stifling innovation. “I think it’s important to find a good balance that allows innovation to continue,” she said. “Especially for [the US], we are leaders in the space and if we over regulate we may lose our leadership. But at the same time, AI deployments have to be safe, they have to be carefully done. I believe that we really need to ensure consumer confidence in the safety of the output of the system.
“I don’t really mind if my AI personal system is not fully tested or makes mistakes if the task is to label my vacation photos, but if the task is to do something like deciding who gets hired in a company or who gets convicted, then we really have to be thoughtful.”
Machines may not be up to speed when it comes to matching human creativity, but what about down the line? AI is improving at such a rate that surely it is only a matter of time before jobs are lost because of it.
Sahami believes there will be “labor displacement” in entertainment “and it’s going to be uneven.”
He said, “It’s true that human creativity is not replaceable, in some sense. But human creativity can be augmented.”
He gives a simple example. When people say that AI doesn’t generate anything that hasn’t already been generated, they disregard the fact that most themes in entertainment are regurgitated in some form.
George Lucas famously leaned on Joseph Campbell’s book The Hero With a Thousand Faces to join together classic storytelling tropes into the mythology of Star Wars.
The same is arguably true of all art, painted, filmed, written or played — it leans on the shoulders of giants.
“There’s these universal themes that come up over and over, but they have variations,” said Sahami. “Basically, they’re an amalgam of a bunch of different ideas, and AI can potentially do that. That doesn’t mean it’s necessarily going to generate the next great script. But it could generate ideas that empower a smaller group of people to generate the next great script. And then that becomes a question for studio executives: are you going to have more people in the room or fewer people with a bunch of power tools? That essentially is a human decision at the end of the day.”
Georgia Tech sociologist John P. Nelson explains how countless hidden people contribute to the magic of ChatGPT and other language AIs.
October 6, 2023
Step Inside: The Spectacle of “U2:UV Achtung Baby Live At Sphere”
From “U2:UV Achtung Baby Live At Sphere”
TL;DR
U2’s residency at Las Vegas’ Sphere opened September 29 and is a multisensory experience that seems to be living up to the critical hype of the new venue.
MSG Entertainment built its own camera system and a post workflow in order to accommodate its 16K by 16K screen. The company also designed a custom media recorder to capture all that data. And Sphere’s experience wouldn’t be complete without a tailored audio setup: a 164,000-speaker audio system that can isolate specific sounds, or even limit them to certain parts of the audience.
Artist Marco Brambilla’s “King Size” video is also playing during U2’s performance of “Even Better Than the Real Thing.” He used AI to make the highest resolution video collage of all time, dedicated to rockstar Elvis Presley.
In addition to “U2:UV Achtung Baby Live at Sphere,” the megadome features a short film by director Darren Aronofsky and its Exosphere is covered in a Refik Anadol installation.
U2 debuted its show, “U2:UV Achtung Baby Live at Sphere” September 29 and, writes Rolling Stone’s Andy Greene, “The Sphere somehow managed to live up to years of hype with its dazzling 16K resolution screen that transported 18,600 fans from the stars in the night sky to a surreal collage of Vegas images, the arid deserts of Nevada, and the information overload of Zoo TV.
“And the sound wasn’t the sludgy, sonic assault you typically get at an arena or stadium concert. It is clear, crisp, and pristine, making earplugs completely unnecessary. As advertised, this was a quantum leap forward for concerts.”
U2 guitarist the Edge told Wired’s Steven Levy that the sound quality is a step above because Sphere “was designed with audio in mind, whereas most of the venues we end up playing … were designed primarily for sports, where the sound is a very kind of low priority. It’s really paid off.”
Learn more about U2’s residency here, and watch the band’s performance of “The Fly” from the show here.
From U2’s video for “Atomic City,” courtesy of U2
Sphere is described by Deluxe SVP of Innovation Richard Welsh as “probably the biggest, most elaborate manifestation of all of the technologies that you might experience in a cinema-like environment now, translated to a huge, huge space.”
For more on the technology that enables Sphere, you can watch our video of Welsh in conversation with Eric Cantrell and Roman Sick.
Developers of the 366-foot-tall, 516-foot-wide dome are aiming to reinvent every aspect of the live event experience, and Sphere is the culmination of seven years of work, with a budget that reportedly stretched to more than $2.3 billion (making “it the most expensive entertainment venue in Las Vegas history, beating out the $1.9 billion Allegiant Stadium,” per The Impossible Build.)
Inside, that translates to a venue that can seat 17,600 people; and 10,000 of them will be in specially designed chairs with built-in haptics and variable amplitudes: Each seat is essentially a low-frequency speaker. There’s also the option to shoot temperature- and direction-controlled (or scented!) air at fans.
In addition to the massive dome and custom seating, MSG Entertainment built its own camera system and a whole post-production workflow, which together comprise a system it calls Big Sky, in order to accommodate its 16K by 16K screen, believed to be the world’s highest resolution. Because this screen “covers almost the entire interior’s curved walls and roof,” The Impossible Build YouTube channel predicts it will be like stepping inside “a real life metaverse.”
The Big Sky single-lens camera boasts a 316-megapixel sensor capable of a 40x resolution increase over 4K cameras. It can capture content at up to 120 frames per second in its 18K square format, and at even higher frame rates at lower resolutions.
These special cameras also require custom lenses, including one with a 150-degree field of view, matching the view of the sphere where the content will be projected, and another with a 165-degree field of view designed for overshoot and stabilization, which is particularly useful when the camera is in rapid motion or mounted in a helicopter.
Additionally, MSG Entertainment designed a custom media recorder to capture all that data, including uncompressed RAW footage at 30 gigabytes per second, with each media magazine containing 32 terabytes and holding approximately 17 minutes of footage.
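Those recorder figures hold up to a quick sanity check (assuming decimal units, which the article doesn’t specify):

```python
# Sanity-check the Big Sky recorder figures quoted above
# (assuming decimal units, i.e., 1 TB = 1,000 GB).
magazine_tb = 32       # capacity of one media magazine
rate_gb_per_s = 30     # uncompressed RAW data rate

seconds = magazine_tb * 1_000 / rate_gb_per_s
print(f"{seconds / 60:.1f} minutes per magazine")  # ~17.8
```

That lands within rounding distance of the quoted 17 minutes per magazine.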
Sphere’s immersive sphere-ience, err, experience wouldn’t be complete without a tailored audio setup. The 164,000-speaker audio system, designed by German start-up Holoplot, can isolate specific sounds, or even limit them to certain parts of the audience. (That means certain audience sections could listen to a movie in different languages, or even hear different instruments.)
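Holoplot hasn’t published its algorithms (its materials describe beamforming and wave field synthesis), but the basic trick behind steering sound at one section of seats can be sketched with textbook delay-and-sum beamforming: delay each speaker’s feed so the wavefronts all arrive at the target at the same instant. A toy sketch with invented geometry:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def delay_and_sum_delays(speakers, target):
    """Per-speaker delays (seconds) so every wavefront arrives at
    `target` at the same instant, reinforcing the sound there."""
    dists = [math.dist(s, target) for s in speakers]
    farthest = max(dists)
    # Closer speakers wait longer so all arrivals line up.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# Toy line array: 8 speakers half a meter apart, aimed at one seat.
speakers = [(x * 0.5, 0.0) for x in range(8)]
seat = (10.0, 25.0)

for i, delay in enumerate(delay_and_sum_delays(speakers, seat)):
    print(f"speaker {i}: delay {delay * 1000:.2f} ms")
```

Run the same calculation for tens of thousands of drivers and many targets at once and you get the gist, though not the sophistication, of steering different feeds at different seating sections.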
From U2’s “U2:UV Achtung Baby Live at Sphere” promo
Make Way for the King
“We’re pushing the envelope in the visual area as far as you can. All the artists we worked with have given us incredible material that we feel really connects with our music,” the Edge told Wired. “But in the end, the songs dictate what we put on the screen and what we do as a band in performance.”
For example, artist Marco Brambilla’s “King Size” video is playing at Sphere now through December, during U2’s performance of “Even Better Than the Real Thing.”
Brambilla used AI to make the highest resolution video collage of all time, dedicated to rockstar Elvis Presley. The 16K resolution, AI-generated immersive artwork for the venue’s opening celebrates the king of rock’n’roll with glorious, exaggerated excess.
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
To create “King Size,” Brambilla started out by feeding over 12,000 film samples of Presley’s performances, including the 33 movies he starred in, into Stable Diffusion. This allowed him to catalog hours of footage and select what he needed with ease. He also utilized a combination of Stable Diffusion and Midjourney to generate “fantastical versions of Elvis” for the short.
“I knew it was going to be something epic and iconic, and I knew it would be odd at the highest level,” Zane Lowe said on Apple Music. (Watch the full interview, above.) But Lowe added, “What I didn’t realize, though, was that it was going to create such a profound feeling.”
Aronofsky and Anadol
What other content can be seen at Sphere in its debut year?
Director Darren Aronofsky was commissioned to shoot “Postcard From Earth,” the first piece of cinematic content for the Sphere, with the Big Sky camera wielded by Oscar-nominated cinematographer Matthew Libatique. It’s featured now as the marquee show for The Sphere Experience.
“At its best, cinema is an immersive medium that transports the audience out of their regular life, whether that’s into fantasy and escapism, another place and time, or another person’s subjective experience. The Sphere is an attempt to dial up that immersion,” Aronofsky tells Carolyn Giardina.
In another article for The Hollywood Reporter, Aronofsky explained, “I wanted to shoot macro shots [for ‘Postcard’] because to present them in 18K to audiences with that level of detail would be something no one’s ever seen before.” (One end result of this aim: jumping spiders. You were warned!)
All told, these intensive captures resulted in “a whopping half-petabyte of data,” per THR.
As Wired’s Steven Levy put it: “While U2 used the Sphere to create a genuine concert, ‘Postcard’ is more of a mind-stretching theme-park attraction.”
The Sphere’s interior isn’t the only way to experience groundbreaking content; viewable from parts of the Strip, the Exosphere currently displays an “AI data sculpture” created by Refik Anadol and dubbed “Machine Hallucinations: Sphere.”
From Refik Anadol’s “Machine Hallucinations: Sphere”
The Exosphere is covered with nearly 580,000 square feet of fully programmable LED paneling, creating the largest LED screen in the world. The paneling consists of approximately 1.2 million LED pucks, spaced eight inches apart. Each puck contains 48 individual LED diodes, with each diode capable of displaying 256 million different colors.
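Those figures are roughly self-consistent; a back-of-envelope check:

```python
# Back-of-envelope check on the Exosphere figures quoted above.
area_sqft = 580_000
puck_spacing_ft = 8 / 12                 # 8-inch spacing

pucks_implied = area_sqft / puck_spacing_ft**2   # one puck per grid cell
diodes_total = 1_200_000 * 48                    # quoted pucks x diodes each

print(f"~{pucks_implied / 1e6:.1f}M pucks implied by area")  # ~1.3M
print(f"~{diodes_total / 1e6:.1f}M individual LED diodes")   # 57.6M
```

The area and spacing imply about 1.3 million pucks, in the same ballpark as the quoted 1.2 million, which works out to some 57.6 million individually addressable diodes.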
With the Sphere, Jesus Diaz writes for Fast Company, “the building is the canvas — a bland engineering marvel that transforms into something visually arresting once Anadol gets his hands on it.”
Artist Refik Anadol employs machine learning algorithms to create an immersive digital experience projected onto the Las Vegas Sphere.
October 6, 2023
Elvis Has Definitely Not Left the Building: Marco Brambilla’s AI-Generated “King Size”
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
TL;DR
Artist and filmmaker Marco Brambilla used AI to make the highest-resolution video collage of all time, dedicated to rock star Elvis Presley. The work debuts at the Sphere, a new venue for immersive entertainment in Las Vegas, as part of a new limited residency by U2.
With a storyline about the gradual growth and collapse of an icon, Brambilla employed what he calls “the language of excess” for “King Size,” inspired by Hollywood spectacle and the work of painters Hieronymus Bosch and Pieter Bruegel.
Brambilla calls AI “a blunt instrument” that helps with references and inspirations, but it doesn’t really create intention. “That’s still the artist’s department. For now.”
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
A 16K resolution AI-generated immersive artwork for the opening of Sphere in Las Vegas celebrates the “king of rock’n’roll” with suitably exaggerated excess.
The latest Elvis extravaganza is the work of Italian artist Marco Brambilla. He was commissioned to create the largest video collage ever to fit the giant Sphere screen and to display during U2’s inaugural residency at the $2.3 billion entertainment complex.
The four-minute, hyper-detailed, image-dense video depicts Presley in different incarnations, from young soldier to swaggering movie star to bloated has-been, as well as Vegas itself, “which somewhat similarly evolves from a small desert oasis into the neon epicenter of debacle-spectacle,” writes Jori Finkel at The Art Newspaper.
“I’m using the language of excess,” Brambilla told Finkel. “I wanted it to be a spectacle in the tradition of Hollywood, Busby Berkeley and Irwin Allen. It’s really over the top.”
“The storyline is really the gradual growth and collapse of an icon and also how Las Vegas went from being a desert to a glamorous destination to a mega Disneyland,” Brambilla told Jo Lawson-Tancred at Artnet. “Those two hyperboles seemed very well connected to me.”
“I’ve always been inspired by Bruegel and Bosch and this idea of multiple storylines existing in the same frame, but with video.”
How He Made It
Given that he had only four months to make it, far less time than his previous video artworks took, Brambilla turned to AI to help out.
“The AI allowed me to work much faster in finding the material I wanted. The process became a kind of stream-of-consciousness exercise between myself and the AI model,” he explains to Lea Zeitoun at designboom.
He started out by feeding over 12,000 film samples of Presley’s performances, including the 33 movies he starred in, into Stable Diffusion. This allowed Brambilla to catalog hours of footage and select what he needed with ease. For example, he could simply search his dataset for “crowd in a concert,” and the AI model would pull all of the related clips up immediately for his sampling.
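Text-prompted retrieval of this kind is typically powered by a joint image-text embedding model such as CLIP rather than by Stable Diffusion itself, so the sketch below should be read as a plausible reconstruction of the cataloging step, not Brambilla’s actual pipeline; the folder of per-clip thumbnails is an assumption:

```python
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP-style joint image/text embeddings via sentence-transformers.
model = SentenceTransformer("clip-ViT-B-32")

# Assume one representative thumbnail per clip was extracted beforehand.
thumbs = sorted(Path("clip_thumbnails").glob("*.jpg"))
corpus = model.encode([Image.open(p) for p in thumbs], convert_to_tensor=True)

def search(query, top_k=5):
    """Rank clips by similarity between the query text and thumbnails."""
    q = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q, corpus, top_k=top_k)[0]
    return [(thumbs[h["corpus_id"]].name, h["score"]) for h in hits]

for name, score in search("crowd in a concert"):
    print(f"{score:.3f}  {name}")
```

Because both the text and the images live in the same embedding space, a phrase like “crowd in a concert” surfaces matching clips without any manual tagging, which is the “with ease” part of the workflow described above.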
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
He then used both Stable Diffusion and Midjourney, another AI model, along with his own text prompts to generate the fantastical versions of Elvis: an Elvis rising from a casino table out of piles of coins; a surrealist Salvador Dali-style guitar; a statue of the singer’s head modeled off the stainless steel sculptures at Rockefeller Center. He also “revamped some looks from Elvis (2022), the biopic by Baz Luhrmann, who shares something of the artist’s aesthetics of excess,” per Finkel.
One prompt was “What would Elvis look like if he were sculpted by the artist who made the Statue of Liberty?”
Another was “Elvis Presley in attire inspired by the extravagance of ancient Egypt and fabled lost civilizations in a blissful state. Encircling him, a brigade of Las Vegas sorceresses, twisted and warped mid-chant, reflect the influence of Damien Hirst and Andrei Riabovitchev, creating an atmosphere of otherworldly realism, mirroring the decadence and illusion of consumption.”
The artist also used CGI to edit the samples and inject more details into the video collage, collaborating with a post-production studio in Paris.
After stitching together all of these images, Brambilla ran tests to make sure the video did not feel dizzying. He switched from a vortex-like format, which he found to be too intense, to a scrolling model much like how we view content on phones. He also slowed down a number of the clips so they would be easier to digest at this scale.
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
“It’s like looking through a window. If the work doesn’t cut too quickly or move too fast, it’s actually quite soothing,” he says. He described the Sphere’s screen, consisting of thousands of LEDs, as more membrane than wall. “There’s no feel of architecture when you’re inside. This is the first time I’ve seen something that is impossible to replicate at home or in a conventional movie theatre.”
By the end, Elvis is represented by a monument that towers over the video’s frenetic activity. “It’s almost like we’re in Elvis’s head,” he tells Artnet. “It’s his own memory of Vegas, of how it started. It’s a very subjective point of view so it’s all the neurons firing and everything coming together.”
AI: Tool or Collaborator?
What are Brambilla’s thoughts on using AI as a tool or a collaborator?
“I see it more as a tool at this point,” he tells designboom, “[but] I assume it may become more of a two-way dialogue at some point. Technically, this project is more of a hybrid: it uses the collage technique combined with AI and computer graphics to create a more seamless ‘canvas.’ The process of making it was also informed by the ‘collaboration’ with the AI tool, which often led to unexpected associations that found their way into the work.”
He reports that only about 20% of the output images actually looked like Elvis. “But some really interesting accidents came out of it,” he told TIME. “It was a stream-of-consciousness experiment: You’re working with a tool prompting you to make associations you wouldn’t have made. AI can exaggerate with no end; there’s no limit to the density or production value.”
Brambilla continued this line of thought with Finkel: “What I found is that it was very good at sketching, making conceptual sketches and hybrid images. It often comes up with things that are very magical.”
He found that AI was a huge help in speeding up the process of ideation. “What it doesn’t do well,” he told Artnet News, “is make an output that’s really specific. It fights back, so you never quite get the exact result, but you get options. What I chose to do is take these imaginations and use them as a sketch for CGI artists to modify.”
He added, “AI is a blunt instrument that helps you get references and inspirations, but it doesn’t really create intention. That’s still our department. For now.”
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
“King Size” will play during U2 concerts at the Sphere (September 29-December 16), appropriately enough, during the group’s 1991 hit, “Even Better Than the Real Thing”; Brambilla then plans to show the work at his Berlin gallery, Michael Fuchs Galerie, with an Elvis-inspired soundtrack.
“In the end, this is about trying to make a connection with our audience,” says U2’s Bono. “It’s Las Vegas or bust, baby.”
October 4, 2023
How M&E and AI Are (Cautiously, Conscientiously) Making It Work
TL;DR
A panel discussion focused on practical use cases for AI as a way to relieve some of the potential fear associated with the technology.
Moderated by SMPTE President Renard Jenkins, the panel featured Lewis Smithingham, SVP of Innovation & Creative Solutions at Media.Monks; Maria Ingold, Strategy & Innovation CTO at mireality; Quincy Olatunde, VP, Products, Direct-to-Consumer at Peacock; and Samira Bakhtiar, Director of Global Media & Entertainment at Amazon Web Services.
We will move from an era of monoculture to one that is hyper-personalized, with content and brands tailored to our true identity, the panelists agreed.
Ethically sourced training data and the advantages and risks of working with open source models were also discussed.
A few years ago, when the industry first began to take serious notice of AI, the technology was really an application of machine learning.
The progression and development of the tools and applications since then has been phenomenal, and because of that, says SMPTE President Renard Jenkins, the M&E industry is continuing to find its way through this decade of massive change.
Jenkins was speaking as the moderator of a panel discussion at the IBC trade show in Amsterdam. (The full session, “How to Approach AI and Gain a Competitive Edge,” can be viewed here.)
Amazon has been engaged in machine learning and artificial intelligence for more than two decades, and the technology’s practical applicability in the media and entertainment space is very important to the company.
Samira Bakhtiar, director of global Media & Entertainment for Amazon Web Services, said: “There’s not going to be one foundational model to rule them all. The ability to leverage APIs to access foundational models, as well as to leverage open source solutions, is going to be paramount.”
She mentioned practical use cases for AI in media and entertainment. One is taking archival footage that may not have been shot with a purpose-built slow-motion camera and, by running it through an open-source AI frame-interpolation model, generating a slow-motion version of the footage. Super resolution, using an algorithm to up-res archival content to 4K, is another application.
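The article doesn’t detail the model, but frame interpolation is the standard approach: synthesize new in-between frames, then play the longer sequence back at the original rate. RIFE and FILM are well-known open-source interpolators; the toy sketch below just cross-fades adjacent frames to show the plumbing, with placeholder file names:

```python
import cv2

def naive_slow_motion(src, dst):
    """Double the frame count by inserting a 50/50 blend between
    neighbors. Real interpolators (RIFE, FILM) synthesize
    motion-compensated in-betweens; this cross-fade shows the plumbing."""
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    ok, prev = cap.read()
    h, w = prev.shape[:2]
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    out.write(prev)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(cv2.addWeighted(prev, 0.5, frame, 0.5, 0))  # in-between
        out.write(frame)
        prev = frame
    cap.release()
    out.release()

# Twice the frames at the same fps plays back at half speed.
naive_slow_motion("archival.mp4", "slowmo_2x.mp4")
```

A learned interpolator replaces the `addWeighted` cross-fade with a model that estimates motion between the two frames, which is what makes the result look like true slow motion rather than ghosting.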
Maria Ingold, Strategy & Innovation CTO at mireality, suggested that we need to combine subject matter expertise with AI.
“It’s about taking the strength of our creativity and the precision of technology, bringing that together and creating something that reflects what’s happening with society and our value-driven ideas,” she said. “I think what is absolutely essential in order for us to be successful [is to bring] ourselves as subject matter experts to the AI. This is about the human machine collaboration, in order to solve things together.”
Data and Personalization
Quincy Olatunde, VP of products in the Direct-to-Consumer division at Peacock TV/NBC Universal, focused on the importance of gaining an edge in AI by ethically sourcing your data.
“Performance comes down to the quality of the data and how it is sourced. Larger businesses should be aware of the potential risks they may expose themselves to, while ethical responsibility also remains a big factor.”
On this point, Bakhtiar said AWS works hard to ensure that all its customers’ data is secure. “When training a foundational model we create a copy so your models are completely trained with your own data. You can create a virtual private cloud environment around that to ensure its privacy and security as well. We want to ensure that your data is yours. It’s protected, it’s proprietary. It’s your IP; you should be the one that has control over it.”
Lewis Smithingham, SVP of Innovation & Creative Solutions at Media.Monks, chose to emphasize the vast potential of AI to automate and personalize media. He described the current state of AI as “the worst it will ever be,” comparing it to “an affectionate and obedient hamster” with a gambling problem, but also said that AI would kill off monoculture and birth new “microcultures and subcultures that allow people to personalize their content to the greatest scale.”
The level of personalization that AI can churn out, he said, will enable brands and broadcasters “to actually engage with personalization and create completely new channels” based on self-identity, not demographics.
Bakhtiar agreed, saying that if organizations are looking to gain a competitive edge they need to think about how to leverage the petabytes and petabytes of data they have at their command to create those “hyper personalized, hyper localized experiences.”
The Regulation Conundrum
Smithingham, for his part, is not a fan of regulating AI, believing that if we keep AI open source it will regulate itself.
“If it’s open source and everyone has access then I think human beings will want something that improves their lives rather than something that ultimately destroys humanity.”
In the end, it’s important to remember that “AI is not an entity. It’s just math, people… It’s vectors and fun ways to make those vectors actually do what you would like them to do,” says Jenkins.
“So remove the fear, get excited about it. And I suggest that artists are a part of the actual innovation and the development of the tool. Be a part of it. Use it, get excited about it. Create something cool.”
Led by industry pros, sessions include in-depth looks at how to build your own mobile or home studio from the ground up.
The winner of the “Empire State of Mind” photo competition, which showcases stories about New York, will be announced, along with a case study presentation featuring the participants.
Through the lens of her latest collaboration with Fujifilm, director of photography Sarah Whelden will lead a pair of organic discussions about productivity, performance and morale.
NAB Show New York‘s newest show floor attraction connects photography and video and offers opportunities for professionals working in both disciplines to connect and learn from each other.
The Photo+Video Lab is designed for “all who leverage a hybrid mix of equipment to capture and produce content” and is also perfect for those who want to learn videography or photography.
This lab will offer workshops, Q&A sessions, photowalks, demos of Fujifilm and Adorama products, meetups and more:
Join Frederick Van Johnson, founder and editor-in-chief of the This Week in Photo (TWiP) podcast network, as he explores a contemporary approach to mobile content creation in this insightful session. Discover how he effortlessly crafts high-quality videos, engaging courses, and captivating podcasts using a laptop, tablet, or smartphone. It’s time to break free from the conventional home office setup!
In this session, you’ll gain insights into the essential hardware to support a mobile lifestyle, the best software options and how to utilize them, a comprehensive production workflow from inception to completion, strategies for publishing and sharing your content effectively, and much more. You can watch his video, “Making Content as a Mobile Creator,” here.
Join director of photography Sarah Whelden in an organic discussion about productivity, performance and morale. Through the lens of her latest collaboration with Fujifilm, and drawing on her decade of experience DPing narrative features, commercials and documentaries, Sarah discusses consensus, setting a vibe and navigating communication with everyone on set at various career stages. You can watch her introductory video here.
Step into the future of digital asset management with AI and discover how AI can elevate your video and photo game. In this immersive session, featuring XeroSpace XR/Web3 Producer Elena Piech, attendees will experience and explore how AI tools can be integrated into photo and video workflows. Piech will explain how artificial intelligence tools can streamline content organization, efficiently automate editing tasks and even enhance your creative focus.
You’ll explore the transformative capabilities of AI tools in streamlining and enhancing your photography and video workflows. From intelligently organizing and tagging vast image libraries to automating time-consuming editing tasks, AI empowers you to focus on what you love most — capturing breathtaking moments.
Do you want to set up a home studio but are overwhelmed by the complexity of it all? In this concise presentation, Frederick Van Johnson, founder and editor-in-chief of the This Week in Photo (TWiP) podcast network, will break it down for you and provide a simple, step-by-step approach to get your home studio up and running in no time.
Gain a comprehensive overview that covers all aspects of setting up a home studio, from hardware and software to techniques, scheduling, and more. Whether you’re a budding content creator, musician, podcaster, or someone interested in recording high-quality audio and video, this presentation will give you a practical blueprint to set up your studio quickly and efficiently.
“Empire State of Mind presented by Bamboo” is seeking the next great photographer and their creative collaborators to shoot an exclusive creator merch drop. The twist is, the creator will be taking to the streets of NYC for an epic fan-meets-creator photoshoot. On Bamboo, creators will build a collaborative feed with their team to showcase their artistic style.
They can include their chosen collaborators to present their creative, photographic approach to fashion on the streets of New York. The winner will get to lead the merch shoot for the renowned creator, Avori Henderson, and be tagged in the final post by Avori. The exciting journey will be documented and shared on the Bamboo platform and social media. The winner of the contest will receive a cash prize of $4,000 to help fund their creative journey. Learn more about the contest here.
The “Empire State of Mind: Photo Contest Finale” session at NAB Show New York will demonstrate new storytelling techniques, allowing attendees to discover innovative ways to blend visual mediums. The session will also cover social media tools, providing concrete practices for leveraging photos and videos for social media channels, as well as strategies for content monetization and winning methods for crowd-sourcing photo/video projects.
Reprising her session from Wednesday, October 25, director of photography Sarah Whelden leads an organic discussion about productivity, performance and morale. Through the lens of her latest collaboration with Fujifilm, and drawing on her decade of experience DPing narrative features, commercials and documentaries, Sarah discusses consensus, setting a vibe and navigating communication with everyone on set at various career stages.
This new and exclusive-to-NAB Show New York attraction is for all who leverage a hybrid mix of equipment to capture and produce content. Dive into a full-on immersion in the photo and video world: this integrated workflow experience will allow you to sample the latest tech and gear side-by-side from iconic brands and innovative newcomers.
From “Reflections,” a short film by Fujifilm photowalk guide Yolanda Hoskey
Interested in exploring Hudson Yards with a camera, lighting, a model, and a guide? Sign up for the Fujifilm Photowalks at NAB Show New York.
October 2, 2023
How Yolanda Hoskey Visually Catalogs the Black Experience
TL;DR
Yolanda Hoskey is a New York City-based photographer who specializes in portraiture and street photography. Prior to working as a still photographer, she worked in theater and film.
She is dedicated to cataloging the diversity of the Black experience. Fujifilm’s color science and Capture One editing software both help her to capture her vision.
Hoskey will be one of the guides for the Fujifilm Photowalks at NAB Show New York. Register to attend here (use the code AMP05 for free admission) and then sign up for your preferred time: October 25 at 10:30 a.m., 1 p.m., or 3:30 p.m. and October 26 at 10:30 a.m., 1 p.m., and 3:30 p.m.
“Why I found the camera, it’s a little bit layered,” says New York City portrait photographer Yolanda Hoskey.
“I’ve been in the creative industry for the last 10 years,” Hoskey says. But the majority of her career has not been working in still photography.
Prior to working as a full-time photographer, Hoskey worked in both theater and film for seven years: first as a stage manager, theater set designer, costume designer and director, then transitioning to film, where she worked as a production designer and creative producer.
“Because of my experience in the theater industry, the film industry, I’m very much drawn to storytelling,” Hoskey says.
Additionally, the seeds for her photography career were actually sown in her first year of college, when her mother passed away.
“I was thinking about just trying to remember her, her legacy, our time together, and I realized I only had the same six pictures, and one video of her because we didn’t … document our existence,” Hoskey recalls.
That experience shapes her work today. “I want to be able to give that back to the community that I identify with and create, kind of, this catalog and this collection of memories of real people, to say that they were here, they mattered and that they were loved, as an ode to my mom.”
Hoskey understands that not everyone is drawn to her community of origin, which she describes as “the projects in East New York,” so she says, “part of the work that I do as a photographer, is trying to kind of debunk and de-stigmatize and create new representations of the Black experience as nuanced, as non-monolithic, as multidimensional. And so I’m trying to create this catalog of this vast representation of the Black identities that are different from mine.”
YOUTUBE UNIVERSITY
Because Hoskey did not go to film school, she says, “I definitely am a graduate of YouTube University. I am always one pulling on my community” to level up her game.
“I just YouTube questions and [watch] videos that answer my questions, and I don’t really go to one person’s specific channel,” she explains.
One of her favorite creators on social media is Adrian Per, also known on social media as @OMGAdrian. Hoskey says, “He is so helpful. His videos are so engaging. And he’s talking about video making, adding coloring, how do you [do] sound and he’s like telling you exactly how he does the work that he does. … I[‘m] always saving them. Oh, I’m gonna try this later.”
In addition to her peers, social media and YouTube, Hoskey says she’s gained knowledge from “an online learning platform called Domestika,” which she says is relatively affordable and offers a wide variety of classes.
“Whatever your industry is, there’s a course for that. And there’s someone who is doing the style of photography that you’re doing. And they’re literally giving you, like, this is what I do, this is why I do it. These are the tools I use, and then they encourage you to go and practice.”
HOSKEY’S KIT
Hoskey may have started shooting on her iPhone, but she knew that wasn’t her end game.
“I feel my photography is very personal. And I applied that same way of thinking to when I bought my official camera,” the Fujifilm X-T3, she explains. “A lot of my early work is on the X-T3, X-T4.”
Hoskey has stayed loyal to Fujifilm’s X line. “About last year, I switched to the Fujifilm X-H2, and that has been my go-to camera for editorial work, for in-studio work.” She chooses this camera, she says, because she feels like “that’s the clean, the polished, the sharp, the vibrant colors… That’s more of my polished, professional work.”
For a different vibe, she says, “I’ll use my X-T5, if I just want to go out and take street photography photos or just, you know, just capture everyday life. …Those photos feel like the most film out of the collection of cameras that I have…on the X series. They all feel like film photos, those nostalgic film photos.”
“And then recently,” she says, “I’ve actually tried full frame cameras, and I’m kind of obsessed. You might see a lot of work from me using full frame cameras.”
Hoskey also shares her “trusty three lenses that I will never leave the house without” because she says they offer “the most range with the type of portrait photography I do.”
“So the first is my 56 millimeter. It’s a great portrait lens. The level of detail it gives me, and it’s just super sharp. I love that lens.” So much so that she has two versions of the same lens! She explains, “I got the newer version, and it’s even better than the original one that I had.”
Also, “instead of a 35 millimeter I’ve been using a 33 millimeter. It’s a slight difference, but that is my favorite lens of all time that I have. That is the lens that I always shoot with. It is always in my bag.”
“And then for a little razzle dazzle,” she says, “I like to add an 8 millimeter, because I love getting super wide shots, and I love using wide lenses for close shots, just because of how it elongates the body creatively.”
Lighting accessories are also important to Hoskey.
“Because I started as a natural light photographer, I always have a reflector in my bag,” Hoskey says. She is especially enamored with the silver side of the reflector for getting the desired effect.
“I think I use continuous lighting because I came from film and theater,” she posits.
EDITING
Hoskey uses Capture One and Photoshop to process her images. She says she loves to select her photos, but “it becomes grueling when I get to the editing process. Because I am not a person who edits all my photos the same way.”
“My editing is purely based off the vibe” of the image, she says. And that means there’s “no short way to do it.”
She says she primarily relies on Capture One (as opposed to Lightroom) because of one feature in particular: it enables users to easily “isolate editing the skin tone from editing the entire picture.”
She also compliments Fujifilm’s color science and says “Capture One just amplifies it.”
BEST PRACTICES AND ADVICE
If you’re not ready to invest in certain tools, Hoskey has a few solutions that might come in handy.
“If you don’t have a reflector or can’t afford to buy a reflector, you can use aluminum foil, and it’ll have the same effect,” Hoskey says. “Or if you don’t have a bounce… I use tissue paper, or you can use white poster board; that’s 50 cents.”
She offers a third hack: “I tried one wildcard in place of the gels for the light. I used these colored binder dividers. They were plastic dividers, and if you put them in front of the light, it works the same as using the gel.”
A portraiture-specific tip from Hoskey is that moisturized skin is crucial. For a quick fix, she says, “olive oil [or] hairspray [can be used in a pinch] to make the skin look lustrous.”
FUJIFILM PHOTOWALKS
If you’d like to learn more from Hoskey in person, NAB Show New York offers the perfect opportunity: She will serve as one of the guides leading the Fujifilm Photowalks around Hudson Yards, October 25-26.
Hoskey says attendees should expect a “good time, a lot of fun” and to jump into portraiture because there will be models available to pose during the walk.
“Definitely utilize the resources of the people who are there to support this photowalk, which is myself and other photographers and the [Fujifilm] techs,” Hoskey recommends, “to overall make yourself have the best experience.”
The Fujifilm technical experts will be on site, Hoskey says, to “tell you anything you need to know about the camera that you’re using,” which will either be the X-H2 or X-H2S or the brand new GFX100 II. “They’ll be there to let you know what the settings are like, if you’re confused about what burst mode is, or what film simulations to use.”
“And if you have questions creatively, the photographer who is leading the photowalk will help you,” Hoskey says.
She explains, “I want to answer any questions people may have. What disconnects [are they facing]? What learning curves are they hitting? What walls?”
Hoskey would also like to emphasize composition because “when I’ve done photo ops in the past, what I’ve noticed is people are scared to move or engage or use angles, get high, get low, and kind of really compose their shots.”
In addition to composition, she expects attendees “to play with lighting. I started with natural light, and I want to show people the different ways in which you can shape natural light.”
“The goal is for people to walk away with images that they’re proud of, and a new skill they can apply to their photography of the future,” she says.
Interested in exploring Hudson Yards with a camera, lighting, a model, and a guide? Sign up for the Fujifilm Photowalks at NAB Show New York.
October 2, 2023
“The Creator”: Making a Sci-Fi Epic Like an Independent Film
TL;DR
The Creator was made for $80 million and looks like $300 million, and Hollywood is astonished.
Actually shooting on location proved far more cost-efficient than studio builds and volume work, not least because of writer-director Gareth Edwards’ savvy at creating VFX with realism and scale.
The film is IMAX and Super X screen-certified, yet was shot almost entirely on a $4,000 prosumer camera, giving the lie to the idea that blockbusters need blockbusting budgets and the highest-end gear.
In modern film economics, the entire $80-90 million budget of The Creator is the sum a studio would usually allocate just to marketing a sci-fi blockbuster on the scale of Avatar or Star Wars, and Hollywood is marveling at how director Gareth Edwards got away with it.
The Creator looks like it was made for double or quadruple the price, and would have been had conventional methods been applied. Instead, Edwards reverse-engineered the process by filming with a relatively small kit and crew in multiple locations, locking off the edit, and only then calling in VFX.
“They said there’s no way we can really do this because… you can’t find these locations and build sets in a studio against green screen, and it’ll cost a fortune,” the British writer-director explained to AV Club, in comments picked up by John Owens at Frame Voyager.
“Instead, we went to real parts of the world that look closest to what we wanted, then afterwards when the film’s fully edited, we get production designer James Klein and other concept and VFX artists to paint over those frames and put the sci-fi on top.”
Edwards spent a decade at the start of his career “doing computer graphics very cheaply in my bedroom,” he explained in the same video breakdown. “I tried to learn a lot of tricks as to how to make things look bigger than they are with very little effort. Like one of the things that’s free [to build] is scale. You learn that something’s only big when it’s relative to something else. The key is always having this something else in the frame.”
Speaking with Todd Gilchrist at Variety, he added, “Essentially, if you make sure everything in the immediate 10 or 20 meters [of the frame] is for real and that the stuff that you’re going to invent is in the distance. The way parallax works, the brain can’t tell motion beyond about 20 meters. It’s like putting digital matte paintings behind your foreground. That’s a really good bang for your buck.”
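Edwards’ 20-meter rule of thumb is easy to put numbers on: for a sideways camera move of b meters, a point at depth d shifts on screen by roughly f·b/d pixels, where f is the focal length expressed in pixels. A quick sketch with assumed values:

```python
# On-screen parallax shift ~ focal_px * baseline / depth (small angles).
focal_px = 2000     # focal length in pixels (assumed 4K-ish framing)
baseline_m = 0.1    # a 10 cm lateral camera move

for depth_m in (2, 5, 10, 20, 50, 200):
    shift_px = focal_px * baseline_m / depth_m
    print(f"{depth_m:>4} m away: ~{shift_px:6.1f} px of shift")
```

Under these assumed numbers, a 10 cm camera move shifts a subject two meters away by about 100 pixels but a background 200 meters away by a single pixel, which is why distant set extensions behave like static matte paintings.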
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
The shooting process could really only be called guerrilla filmmaking, with Edwards taking cues from his original low-budget feature Monsters.
One of the most indie methods he opted for was a rejection of how VFX is normally approached in the industry today, which is to use green screen and volume stages and to push iterations of VFX until minutes before release.
What was unorthodox was that the locations did much of the visual heavy lifting, with environments, lighting, architecture and geology for the most part looking just as they were filmed. Only after the film was edited and locked was VFX allowed to go to work.
“There was a little bit of volume work at Pinewood but very low,” Edwards admitted to Owens. “And if you do the maths, if you keep the crew small enough, the theory was that with the cost of building a set which is typically like $200 grand — you can fly everyone anywhere in the world for that kind of money.”
Numerous VFX houses were contracted to work on The Creator, including ILM, Folks VFX, Outpost VFX, VFX Los Angeles, and Misc Studios, among others.
As Owens puts it, “We’re not saying that there wasn’t work done with the backgrounds, but VFX [didn’t have] to fully digitally recreate every scene. This allowed more effort to be put into making the robots of this world feel even more real [because] locations, props and characters are already in the shot. Often there was no additional work, or relatively minimal labor, needed in finalizing a character or environment.”
According to Edwards, this approach saved them tens of millions of dollars and stands in contrast to the way CGI and VFX heavy productions are made.
To illustrate the indie nature of the shoot, Edwards says a scene near the beginning when Joshua (John David Washington) was running down the beach under crossfire was shot in a location still open to the public.
“We didn’t close that beach. If you look at our feet in the background, you can see bars and tourists. One person came over, and was like ‘What are you doing?’ because it was just the four of us with a camera running around and it didn’t look like this big massive movie.”
From “The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
Concept Cinematography
The Creator’s original cinematographer was Greig Fraser (The Batman), before he was lured away by the mega-budget Dune (for which he wound up winning the Oscar).
Before launching the production to shoot on location across the globe, Edwards filmed a proof-of-concept reel, in which he captured actors and extras in real environments without preparing them in advance to accommodate digital replacement technology.
“We wanted to do it not in a traditional system, and it was really important that we stuck with that, because sometimes in filmmaking you make small compromises, and before you know it, your small compromises become big compromises,” Fraser said in Variety. “So we worked for a number of years just doing testing and talking and looking at the way we could actually achieve this.”
He elaborated on working the concept through with Edwards in The Playlist: “We try to keep the camera-facing crew as small as possible, allowing for those resources to be put into areas that need them later on, like post-production or VFX.
“A lesson I’m hoping everybody who’s reading this will learn is this: it’s possible to completely turn the filmmaking process on its head, and when we do, there are massive cost savings to be had, but also quality improvements.
“Films work because every person knows the parameters of their job, and that’s why you don’t have people stumbling over each other. It’s an established relationship; it’s efficient from a personnel standpoint, but unfortunately, with those efficiencies also come inefficiencies that we’ve hopefully turned on their heads with this movie.”
Unusual Camera Choice
Equally unusual was their choice of a prosumer mirrorless camera, the Sony FX3.
“It’s a camera you can buy at Best Buy,” Edwards says. “It looks like film. It’s full frame, full IMAX resolution [for certification on IMAX screens], and has filmic photographic quality to it.”
Gray Kotzé at IndepthCine went deeper on the camera choice, explaining that the difference between the $3,900 Sony FX3 and the $75K ARRI Alexa Mini LF is remarkably small.
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
The FX3 can record in 4K resolution, while the Mini LF can do 4.5K. In terms of dynamic range, Sony reports 15+ stops, while ARRI claims 14+ stops. When it comes to bit depth, the FX3 shoots 10-bit 4:2:2 internally in S-Log, whereas the ARRI can shoot 12-bit 4444 XQ in Log-C.
“While the ARRI does outperform the Sony visually, especially in the color department, the point remains that the gap between them is pretty slim, when comparing a prosumer and a professional camera and seems to be closing more and more every year.”
Does this spell the end of the Alexa forever and mean that all future Hollywood productions will use the FX3? Well, no, probably not.
Kotzé thinks the workflow and philosophy of using high end camera gear is too ingrained.
“The entire industry is set up around working with high end production cameras and I don’t think that this will change any time soon,” he says.
“Studios know what they will get when shooting with an Alexa, and producers are used to budgeting for gear in terms of an Alexa rental fee,” he says.
However, what we may see is features from prosumer cameras, such as the high ISO base and smaller form factor, filtering into higher-end cameras, and prosumer gear increasingly being adopted across lower-budget projects.
Questions Over One VFX Shortcut
Shortly after the trailer was released, allegations emerged that the filmmakers had used footage of the real-life 2020 Beirut explosion as the basis for a shot in which a nuclear warhead detonates over LA.
It’s not unusual to use original source material, stock footage for example, in the VFX pipeline, but it’s not clear whether using this footage, of a disaster that killed hundreds of people, was an oversight.
Edwards doesn’t appear to have addressed this anywhere, but Owens does, and condemns its use: “While The Creator’s team favored practicality and realism over excessive effects, this is one instance where they went a little too far in using footage of a real-life tragedy, especially a recent one, for a purely fictional movie. It seems a little bit out of touch, to say the least, and it proves that we may need more discussion when it comes to this side of digital effects in the future.”
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
Director Gareth Edwards considers the debates around AI, reflected in his film “The Creator”: “Should we embrace it or should we destroy it?”
October 30, 2023
Posted October 1, 2023
How These YouTube Creators Directed the Breakout Horror Feature “Talk To Me”
From “Talk To Me,” courtesy of A24
TL;DR
After years of sensation and spectacle with RackaRacka and around 1.5 billion views, YouTube creators Michael and Danny Philippou made breakout horror film “Talk to Me.”
They shot the film in 26 shooting days using all the DIY skills they had learned producing spoof movie shorts for streaming.
They talk about their process, as does cinematographer Aaron McLisky, and the brothers are already working on a sequel and a prequel.
“Our end game was never YouTube,” says Danny Philippou. “We didn’t want to be YouTubers; we always wanted to be filmmakers and to make cinematic experiences that didn’t have to rely on YouTube algorithms.”
Philippou and his twin brother Michael are seriously successful YouTube creators whose channel RackaRacka has racked up 6.6 million subscribers.
Earlier this year they released their debut feature, “Talk to Me.” The Australian independent supernatural scarer, made for $4.5 million, has become A24’s highest-grossing horror movie in North America.
“YouTube was just a way to create and get exposure,” adds Danny Philippou. “We really got sucked into the vortex [of] YouTube for years.”
The movie, which has grossed $70 million at the box office worldwide, is about a group of friends who discover an embalmed hand that lets them become possessed by spirits, turning possession into a thrilling party game.
The main character, Mia, has recently lost her mother, and her grief makes the idea of finding her mom on the other side both compelling and dangerous. But soon, the supernatural forces can’t be controlled any longer.
Directors Danny and Michael had no prior feature film experience, but are self-taught DIY filmmakers behind a series of horror/comedy YouTube shorts that have been seen over a billion times — all completely shot and edited by the duo.
Initially, “Talk to Me” was scheduled to be an eight-week production, but when the first-time feature filmmakers opted for promising young Australian talents over proven stars, the budget shrank and the shoot was reduced to just five weeks.
In one big montage scene, a group of teens take turns clasping a magical embalmed hand, which in turn makes them possessed by the dead. The Philippous wanted to shoot it with the quick-cut, laughing-gas energy of a drug trip, writes Max Cea, who interviewed them for Esquire, but they didn’t have time to get all the shots they wanted.
“We wanted 50 set-ups and the first AD said, ‘It is mathematically impossible to get all these shots,’” says Michael Philippou.
From “Talk To Me,” courtesy of A24
“We were like, ‘We need to shoot this Racka style,’” says his brother.
Picking up the story in The Editing Podcast, Danny Philippou recalls, “As long as we control the set for these two hours, we had two cameras, a boombox playing music, and riffing with the actors going through these shots. [It was] a really extreme chaotic energy [with] the camera just flying around.”
And in the film’s production notes, he recounts of the same scene, “It was us hiding behind a couch, screaming directions, two cameras flying everywhere. It was so much fun. I feel like the film captures the energy that was behind the scenes, as well.”
It was a sequence that took all of the brothers’ training to pull off: an ease with run-and-gun tactics from their streaming careers, along with the exposure to traditional production that kept them grounded in a more structured environment.
Cinematography
One key to that more conventional structure was working with cinematographer Aaron McLisky. Although McLisky has gone on to shoot the action thriller “Poker Face” for director Russell Crowe and seasons of the TV drama “Mr. Inbetween,” at the point the brothers got in touch with him, he had only made a short film called “Nursery Rhymes.”
“What was fascinating about them is they’re sort of these grassroots self-taught filmmakers in their own right, telling these really ambitious stories,” he tells Cinepod. “They shot everything themselves, they edited everything themselves, and they went to such extremes, that’s obviously what caught the world’s attention.”
He goes on to say that the duo had strong ideas about how they wanted “Talk To Me” to be cinematic: “In some ways, they wanted to prove to the world that they are serious filmmakers. To me, that was quite motivating that they had this intention to move away from YouTube-style content, but we always talked about the influence of RackaRacka on this movie, and when it was appropriate, and when it wasn’t.”
Cinepod recounts that McLisky “kept scenes lit with practical lighting and green fluorescents as much as possible, making Mia seem sickly and possessed. During the possession scenes, Aaron chose to contrast the sequences with unmotivated lighting, and as Mia’s psychological decay progresses, the film subtly becomes darker and more desaturated in the grade.”
Additionally, McLisky knew that it was crucial “to be sure that the camera work elevated the tone of the horror movie, by showing or withholding information as needed.” For McLisky, “every frame and every camera movement speaks to a world that’s truthful to the characters.”
Danny Philippou elaborates on the filmmakers’ ambition: “There’s this weird stigma around being a YouTuber that you’ve got from the media. Even when [we] spoke to [“Mad Max” director] George Miller, he said if he had the platform of YouTube when he began, he’d be uploading to that, because it’s a way to get seen internationally straightaway. There are talented YouTubers that want to be filmmakers that I think definitely can make that crossover.”
What’s Next?
The Philippous credit Causeway Films producer Samantha Jennings for helping them find the backing to get their feature made. The film’s runaway success has naturally made them go-to talent for more.
“We’ve been offered so much stuff, we’ve been lucky enough to get into all these rooms,” says Michael Philippou interviewed by Perri Nemiroff at SXSW. “We’re just gonna go put everything on the table and decide.”
A24 has already announced a sequel is in development with the Philippous returning as co-directors. They have also completed principal photography on a prequel film.
“I think we kind of have a cheat code,” Michael says of working with his twin. “Because there’s so much responsibility, being able to like, share out the load a little bit and also having someone who has the exact same day and the exact same overall vision. It’s like, oh, man, we’re both just crazy together.”
They admit that their outward bravado may not be all it seems.
“In the daytime, I was so confident about it,” says Danny to Esquire, “but at night, all those doubts start to creep in, and you’re like, I don’t know what the fuck I’m doing. I’ve never made a movie before. I can’t believe there’s millions of dollars on the line. I’m just questioning and overthinking it. Then when the day comes, you just have to take the leap.”
Held Oct. 26, the conference is for photographers and online video creators who want to build their businesses and expand their skillsets.
October 30, 2023
Posted October 1, 2023
Virtual Production: It’s a Real-Time, In-Camera Love Affair
TL;DR
Virtual production has evolved from a proof-of-concept technology to an industry standard, revolutionizing film and TV production.
NAB Show New York will feature a panel discussion, “The Virtual Production Revolution — A Real-Time Love Affair,” led by Jim Rider, Virtual Production Supervisor at Pier59Studio, with KéexFrame founder & CEO Arturo Brena, VP Toolkit founder & CEO Ian Fursa, and ASHER XR founder & CEO Christina Lee Storm.
Core technologies like camera tracking, LED walls, and Unreal Engine have matured, making virtual production a reliable tool for filmmakers.
The technology is not just for high-end productions; it’s becoming more accessible and is also being employed in various industries from fashion to automotive.
The future of virtual production looks promising, with advancements in AI and other technologies set to further integrate and standardize workflows.
Virtual production has rapidly evolved from a nascent technology into a mainstay that’s fundamentally altering the film and television production landscape. But VP isn’t just about the use of LED walls and securing a really great stage; it’s an ever-evolving set of technology solutions and practices.
Leading the panel is Jim Rider, virtual production supervisor at Pier59Studio, whose years of hands-on experience make him an authority on core VP technologies. He will be joined by KéexFrame founder & CEO Arturo Brena, who previously won an Emmy Award for innovative live-camera tracking and real-time graphics and has helped conceptualize and lead projects that define the cutting edge in on-air graphics technology. Rounding out the lineup of industry pros are Ian Fursa, developer of the VP Toolkit plugin for Unreal Engine, who also leads educational workflow series at institutions around the world, and ASHER XR founder & CEO Christina Lee Storm, who specializes in the creative development of emerging technologies for linear and multi-platform storytelling.
L-R: Jim Rider, Virtual Production Supervisor, Pier59Studio; Arturo Brena, Founder & CEO, KéexFrame; Christina Lee Storm, Founder & CEO, ASHER XR; and Ian Fursa, Founder & CEO, VP Toolkit.
Ahead of NAB Show New York, the panelists provided a taste of what promises to be a dynamic and insightful session in an exclusive Q&A for NAB Amplify. Watch the full discussion in the video at the top of the page.
A Rapid Evolution
Until recently, virtual production was largely thought to be still in its experimental phase, intriguing but not yet fully integrated into mainstream workflows. “A year ago, two years ago, this was a proof-of-concept technology,” Fursa recalls. “A lot of people didn’t really have a grasp on the technical side of things yet, or how to quite integrate it into their already existing workflows.”
Fast-forward to today, and the landscape has dramatically changed. Rider emphasizes that the core virtual production technologies — camera tracking, LED walls, and real-time graphics engines like Unreal Engine — have matured to the point where they are reliable production tools. “Most of it has been figured out in terms of [being] a production tool.” Alongside color workflows, he says, “the biggest challenge these days is adoption.”
Virtual production, says Rider, has “gone from a, you know, Mandalorian, Disney multimillion-dollar workflow to something that you’re seeing in a lot more stages and a lot more productions that are able to tap into it as a production technique.”
“The genie’s out of the bottle,” says Storm, noting that VP workflows are more accessible than ever. Currently she’s seeing virtual production employed primarily for pre-production, “whether that’d be real-time previs, cinematics, or real-time animation,” although she expects that to open up even more, “and that’s pretty exciting.”
But it’s not just about the shiny new tools; it’s also about how they fit into existing pipelines. “Companies are trying to really leverage real-time pipelines,” she says.
The shift from experimental to essential has been rapid, but there is still much ground to cover, as Fursa notes. “All of the different industries have now started building their own pipelines to utilize this technology. So now we learn how to tell stories with it better, we learn how to refine how those stories are told and what tools are built,” he says. The current push for standardization is another sign that virtual production is maturing. “Now is the time where everybody’s building things and creating what everybody wants: an industry standard.”
Advances in Image-Based Lighting and LED Technology
When it comes to the tech that’s currently driving the virtual production revolution, Fursa and Rider are particularly excited about advancements in image-based lighting and LED technology.
Image-based lighting “really goes hand in hand with virtual production,” Rider explains. “One of the things we always tout is that we can put up multiple environments on the wall, multiple locations in one day. But the ability to actually relate those by doing pixel mapping onto full-spectrum LED light fixtures on set — not only are we changing the contents on the LED wall, but we’re changing the lighting at the same time because that lighting is being mapped to these LED fixtures.”
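Rider’s pixel-mapping description is concrete enough to sketch. Below is a minimal, hypothetical illustration: sample the region of wall content a fixture is “looking at,” then drive that fixture with the region’s average color, so the wall and the set lighting change together. The send_dmx stub and all names here are assumptions, not a shipping virtual production API.

```python
# Hypothetical sketch of pixel mapping: drive a full-spectrum LED fixture
# with the average color of the wall region it faces, so changing the
# wall content changes the set lighting at the same time.
import numpy as np

def average_region_color(frame: np.ndarray, x: int, y: int,
                         w: int, h: int) -> tuple:
    """Mean RGB of the wall region a fixture is 'looking at'."""
    region = frame[y:y + h, x:x + w]
    return tuple(region.reshape(-1, 3).mean(axis=0).astype(int))

def send_dmx(fixture_id: int, rgb: tuple) -> None:
    """Stub: a real rig would write these levels to a DMX/sACN universe."""
    print(f"fixture {fixture_id} -> RGB {rgb}")

# A stand-in 1080p wall frame; map fixture 1 to a 200x200 patch of sky.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
send_dmx(1, average_region_color(frame, x=860, y=100, w=200, h=200))
```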
Other advancements in LED technology include physical chips designed to enhance performance and image quality. “Before [now] LED walls did not render accurate lighting for talent and for real props,” which became a real problem during filming, says Fursa. “It makes it so that we can’t have an explosion that is the accurate color on our subjects. So there’s been a lot of advancements in the development of the physical chips and technology for us to be able to start using LED walls as an actual film production light source.”
What’s Next for Virtual Production?
In the near future, Fursa predicts, virtual production will be employed not just by “high-end series and film productions, but smaller-budget commercial and music videos.”
And as workflows begin to integrate with artificial intelligence, toolsets will become “very powerful,” he says, allowing things like background replacements to become routine automated tasks. Studios, he says, will continue to focus on refinement and standardization in the near-term. Yet “there’s still a lot of benefit” to separating production from in-camera VFX, “because there’s physics there that we still haven’t figured out how to emulate,” he adds.
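As a toy illustration of what such an automated background replacement reduces to once an AI model has produced a matte (the matte itself is assumed here, not computed), the sketch below performs a standard alpha-over composite; none of this is Fursa’s actual toolset.

```python
# Toy background replacement: composite a foreground over a new plate
# using an alpha matte assumed to come from a segmentation model.
import numpy as np

def replace_background(fg: np.ndarray, alpha: np.ndarray,
                       new_bg: np.ndarray) -> np.ndarray:
    """Alpha-over: out = fg * alpha + new_bg * (1 - alpha)."""
    a = alpha[..., None].astype(np.float32)   # HxW matte -> HxWx1
    out = fg.astype(np.float32) * a + new_bg.astype(np.float32) * (1.0 - a)
    return out.astype(np.uint8)

# Dummy 720p frames and a hard matte standing in for a model's output.
fg = np.full((720, 1280, 3), 200, dtype=np.uint8)
new_bg = np.zeros((720, 1280, 3), dtype=np.uint8)
matte = np.zeros((720, 1280), dtype=np.float32)
matte[200:500, 400:900] = 1.0  # pretend this region is the subject
print(replace_background(fg, matte, new_bg).shape)  # (720, 1280, 3)
```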
Brena also sees “a massive grand-scale adoption from the entertainment industry” in the short term, leading to extensive pipeline and workflow modifications and even new ways of storytelling.
“So either from interactive storytelling, from broadcasts, for example, the idea of having things that have been, right now, just eye-candy, they can now be editorial, right? Because you can change our concept of the content in real time.”
As virtual production technologies become more sophisticated, they’re also opening up new avenues for monetization. Looking further out, Brena anticipates that in-device rendering, which will allow consumers to render content on their local devices, “will allow for new ways of not only storytelling, but also new multiplying streams of revenue for advertisers,” he says, a development that’s “going to be very attractive for advertisers.”
Virtual production’s influence isn’t limited to Media & Entertainment, and as VP technologies continue to evolve, their application across various industries will likely broaden, creating new opportunities and challenges. “This is a cross-industry effort,” says Brena. “We see people from fashion, automotive, film, and broadcast collaborating together with these new technologies.”
Generative AI adds a new layer of potential efficiencies for collaboration, Storm adds. “With generative AI, creators are able to see, at least visually, their concepts and ideas,” she says. “You could do that in a quick iteration that cuts out some of that process… that gives a little bit more time in the front end to just work through the kinks, work through, like, ‘Are we on the same page creatively?’”
The production industry will continue to combine technologies like GenAI or even volumetric motion capture with virtual production, she says. “More and more, every year, we’re going to see that evolution as well as the new iterations on workflows.”
Explore NAB Show New York’s newest show floor attraction, the Photo+Video Lab! A dedicated space for the converging worlds of photography and video, the Photo+Video Lab offers opportunities for professionals working in both disciplines to connect and learn from each other.
Explore Hudson Yards with a camera, lighting equipment, a model, and a pro guide during the Fujifilm Photowalks, showcasing the latest gear paired with an integrated workflow experience from Frame.io. Learn how to set up a streamlined and efficient mobile or home studio with This Week in Photo podcast network founder and editor-in-chief Frederick Van Johnson. Cultivate community and collaboration in a series of candid conversations with director of photography Sarah Whelden. Learn how AI can elevate your video and photo game in an immersive session featuring XeroSpace XR/Web3 producer Elena Piech. Discover new storytelling techniques leveraging social media platforms at the Empire State of Mind: Photo Contest Finale.
Told in real time, Hijack is the Apple TV+ thriller starring Idris Elba that follows a hijacked plane as it makes its way to London over a seven-hour flight while authorities on the ground scramble for answers.
With so much of the action taking place in midair, the production made extensive use of virtual production stages and techniques. The show could have been shot on blue/green screen, though this would have necessitated far more VFX and would not have given the actors the experience of “seeing” the film’s environment during their performance.
In addition, director and co-creator Jim Field Smith was keen to achieve as much in-camera as possible, as production VFX supervisor Steve Begg explained to Vincent Frei at Art of VFX.
“He hates, as I do, the giveaway camera positions and moves that signpost the unreality of a lot of CG shots, no matter how beautifully they are lit and rendered. For example, shots like flying up to a jet and passing through the window into the cockpit. We tried as much as possible to make the shots look feasible in the real world. We never have shots just outside the aircraft looking in, for example.”
Sequences featuring Eurofighter jets never have the cameras outside their canopies when we see the fighter pilots. All are shot with locked-off cameras on wide lenses within their cockpit space. All other shots of those fighter jets are on long lenses for POVs or wide in non-subjective shots.
Scenes from “Hijack,” featuring Idris Elba, Jude Cudjoe, Christine Adams, Max Beesley, Hattie Morahan, Jeremy Ang Jones, Kate Phillips and Kaisa Hammarlund. Cr: Aidan Monaghan/Apple TV+.
Director of photography Ed Moore elaborated on the need for authenticity in a case study posted to the Lux Machina website. “Almost everyone has been on a plane, so if something doesn’t feel real to the viewer, they’ll immediately be taken out of the story,” he said. “There are visual cues we all recognise when we’re on a flight, like the sun’s beams coming in and hitting your TV screen, for example. Little things like that make the plane’s world feel real. It was important for me to immerse the audience in it as much as possible, so when the hijack occurs, they feel like they are in this pressure cooker with the passengers.”
The series was shot on four Volume stages in the UK, all operated by Lux Machina and featuring a combination of LED configurations. One stage was for the cockpit itself, situated on a fully automated gimbal that faced an LED wall. Another stage was used to film the air traffic control room and featured a large LED screen that played back the flight map in real time, mimicking a real air traffic facility.
The set piece was a full-scale 230-seat Airbus A330. LED screens were installed on tracks on either side of the plane, providing moving sky content and giving the actors the illusion that they were in flight.
To create this illusion, Lux’s Virtual Art Department (VAD) created assets containing an array of digital clouds hovering above land, water or desert.
“We were able to create and control the content that was put onto these LED screens, and put them on either side of the plane to really create an immersive space for the actors to be in,” Lux Machina producer Kyle Olson explained in a promo video.
Previs on the two main VFX sequences was completed by NVIZ. As this was not an overt VFX project (though it still contained 900 shots), the main creative work, comprising the jet shots, the crash and a handful of matte shots, was assigned to one major vendor, Cinesite UK.
The imagery on the LED backgrounds consisted mainly of cloudscapes and landscapes generated in Unreal Engine and rendered as 12K Notch LC plates for playback.
One episode featured the plane flying from the Thames estuary in London, approaching Northolt and then landing at Denham aerodrome in a spectacular crash. All the cockpit views of London and the estuary were created using a stitched six-camera array shot from a helicopter that was sped up two times on playback.
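To make the playback mechanics concrete, here is a minimal, assumption-laden sketch of mapping wall-clock playback time to a frame of a pre-rendered plate, including a speed factor like the 2x retime described above. It is illustrative only, not the production’s playback system.

```python
# Illustrative only: map elapsed playback time to a pre-rendered plate
# frame. A speed factor of 2.0 mimics the 2x retime described above.

def plate_frame(elapsed_seconds: float, fps: float = 24.0,
                speed: float = 1.0, total_frames: int = 0) -> int:
    """Return the source frame index for the current playback moment."""
    frame = int(elapsed_seconds * fps * speed)
    if total_frames:
        frame = min(frame, total_frames - 1)  # hold on the last frame
    return frame

# Ten seconds into playback of a 24 fps plate sped up 2x:
print(plate_frame(10.0, fps=24.0, speed=2.0))  # -> frame 480
```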
“The plane crash was originally going to be a lot wilder and more audacious than the one we ended up with, with the A330 crash landing onto a motorway through loads of recently abandoned cars,” Olson reveals in the video. “Then there was a bit of a reality check figuring out this will not be believable and countering the reality factor we’ve built up in the rest of the show, ultimately switching the location to a crash landing at Northolt [on the outskirts of London]. Northolt is too short for an A330 landing, we were told, adding to the jeopardy.”
Scenes from “Hijack,” now streaming on Apple TV+, featuring Idris Elba, Christine Adams, Mohamed Elsandel, Nasser Memarzia, Eve Myles, Neil Maskell, Archie Panjabi and Nikkita Chadha. Cr: Aidan Monaghan/Apple TV+.
During pre-production, Moore had a four-screen flight simulator in his office, complete with cockpit controls. He flew it for seven hours, following the same route as the show’s plane, to get a sense of the lighting.
“It gave him a lot of ideas for the type of imagery that he was interested in, and that imagery was provided to us as a mood board, so to speak,” Lux CTO Kris Murray explained. “Our team could recreate portions of that, or take inspiration from them, to create customized versions of images that Ed could control.”
The system’s playback technology was at the core of the virtual shoot, allowing LED screens to display footage of cloudscapes, air traffic maps, and airport information monitors.
“We took the 3D workflow we used on large-scale productions like House of the Dragon and mashed them with the type of work we’d previously done using 2D plates,” said Murray. “That meant building a pipeline that allowed us to export content, in the same format as Unreal’s nDisplay, that could be played on a pre-rendered playback system.”
The VFX department shot master plates of the London skyline from 5:00 a.m. to 9:00 p.m., which provided a range of lighting options from sun-up to sundown, depending on what time of day the script called for. With that content on the LED wall, no post-production work was needed.
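A simple way to picture that workflow is a lookup from the scene’s scripted hour to the nearest plate captured at or before that time. The sketch below is a hypothetical toy (the capture times and the nearest-earlier rule are assumptions), not Lux Machina’s system.

```python
# Hypothetical lookup: pick the skyline plate whose capture hour is
# closest at-or-below the hour the script calls for.
import bisect

PLATE_HOURS = [5, 7, 9, 12, 15, 18, 20, 21]  # assumed capture times (24h)

def plate_for_script_time(hour: float) -> int:
    """Return the capture hour of the plate to load for a scene."""
    if not 5 <= hour <= 21:
        raise ValueError("outside the 5:00-21:00 capture window")
    i = bisect.bisect_right(PLATE_HOURS, hour) - 1
    return PLATE_HOURS[i]

print(plate_for_script_time(16.5))  # -> 15, the mid-afternoon plate
```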
“This set piece could easily have been the most expensive location in the entire production — because prime real estate in London with views of Big Ben and the River Thames would cost you an arm and a leg to rent out,” said Spencer Chase, VP operator and technical director for Lux Machina.
Begg said that the scenes featuring the Eurofighter cockpit were most challenging.
Having elected to shoot them in the 270-degree volume on a motion base, they assumed they’d “get a nice ambient light wrap around the cockpit and pilot” — which they did, to an extent, but the reflections covered more than 300 degrees and you could just see over the edges of the volume.
“Being a TV show it was shot in a mad hurry (i.e., no testing time) and although everyone was initially quite happy with the results, after closer scrutiny we saw all sorts of issues that needed fixing. Lots!” Begg said.
“I’d anticipated we’d have problems with the reflections in the visors, so we had them high-res scanned in order to get a good track and replace everything in them with CG versions of the cockpit, the pilot’s arms and the sky. The moment the reflections were sorted, the shots really started to come together, with added high-speed vapor on top of the Volume sky along with a high-frequency shake. I stopped them doing it in-camera as I had a feeling we’d be fixing these shots, big time. If we had, they would have been a bigger nightmare to track.”
Get a preview of NAB Show New York with our exclusive Q&A with industry experts using virtual production as their go-to production method.
September 20, 2023
What Travel Can Teach You About Production
TL;DR
Travel is an excellent way to inspire your creativity and to accelerate your skills as a filmmaker, says Rubidium Wu. The unique challenges and constraints are extremely helpful for learning who you are as a creator.
Wu shares tips for getting the most out of content you create on “fly-by-night” shoots. Hint: Don’t just wing it and definitely don’t use a new equipment brand for the first time, if you can help it!
Wu also shares his best practices for shooting in NYC.
Does being well traveled make you a better filmmaker? Rubidium Wu of Canon Masterclass argues that it does — or at least that’s how it’s worked out for him.
Much of his travel has been work-related, so the tie-ins to career development are quite direct, but the production lessons Wu has learned were accelerated by these trips and the unique conditions they created.
“I certainly wouldn’t have gone to as many places as I have without filmmaking as my profession, and I definitely wouldn’t be the filmmaker that I am today without all the incredible opportunities that I’ve had,” Wu says in a video for his YouTube channel, Crimson Engine.
In fact, Wu says, “I think making films in other places has taught me more about filmmaking than anything I ever did in my studio at home.”
Filmmaking has influenced his perspective on travel, and it’s also influenced how he travels, which in turn has changed how he lives at home. He explains, “I really love how a new city stimulates your perception of things, and how coming back to your home changes the way that you look at your everyday routine. Everything appears somehow new, offering fresh angles and untapped stories, waiting to be told.”
Additionally, Wu says, “As [travel] filmmakers, we are first and foremost observers. And we strive to condense an entire experience down to a series of images edited together. It underscores the responsibility and the opportunity that we have as storytellers.”
In more concrete terms, “the art of travel filmmaking is an exercise in intentional minimalism. Each piece of equipment, every lens, every microphone has to earn its place because you’re going to have to carry it every step of the way. It’s about distilling your toolkit down to the essentials, ensuring each item serves a purpose.”
But rather than complain about this, Wu says he is “constantly looking for ways to further reduce what I have to bring with me. This constraint, rather than being limiting, can actually be incredibly liberating. It pushes you to innovate, to think outside the box and to truly understand what’s at the core of the story that you’re trying to tell.”
Travel filmmaking also often means working with a skeleton crew or with people you’ve never met before.
Wu says, “Being on the road often means playing multiple roles, sometimes being the entire film crew, even the talent.”
Going solo can mean “that every frame somehow becomes personal, something new created and fought for. You can’t do elaborate setups and clever camera moves. You can’t distract from the essence of the story.”
Wu says this type of work “reminds me of the Dogme movement of the ’90s. It was a commitment to raw, unfiltered storytelling. Their idea was to make films without the crutch of big sets, special effects, multiple locations, Steadicam or ADR, and it really reinvigorated European filmmaking that had sort of lost its way trying to copy Hollywood. It really is a challenge, a call to focus on the human element, and the genuine human experiences that make films worth watching.”
Additionally, “Collaboration is another facet of travel filmmaking that offers many, many lessons,” Wu says. “Engaging with local partners and navigating the language barriers and adapting to their working styles pushes you out of your comfort zone and forces you to experiment with new processes, new equipment, new ways of doing what you’ve done.”
Travel Cinematography Best Practices
Practice at home. “If you’re doing something mission critical that you can only do once, that you’ve invested a lot of time and money in, and you don’t want to leave it to chance, you want to have practiced that thing before,” Wu explains. “You want to have shot a scene similar to it in a similar location, if possible.”
Stick with what you know when possible. “Work with the same gear consistently from shoot to shoot,” Wu recommends. “If you jump from camera system to camera system, or lighting brand to lighting brand, everything works a little bit differently. And you are often going to get yourself stuck in places and having to discover problems that you didn’t know were going to happen on set. Or even worse, when you get back from set to the editing room and something looks nothing like you wanted it to look.”
Embrace the challenge. “Another reason to do this sort of filmmaking is to find out for yourself, what kind of filmmaking you enjoy,” Wu says. “What your strengths are, what your weaknesses are, and how you want to challenge yourself. Filmmaking should never be a walk in the park. If it’s too easy, you’re not being ambitious enough.”
Headed to New York?
Wu has a set of tips and best practices just for working in The Big Apple in this installment of his YouTube series, Destination Film. Check them out before you pack your bags, and I guarantee you’ll arrive both inspired and better prepared.
Things to keep in mind:
Traveling with your gear across town will be time consuming and pricey. You’re not going to want to lug 70 pounds of expensive equipment up and down flights of stairs to the subway.
In fact, everything is going to cost a lot: food, hotels, travel. Budget accordingly.
You don’t need a permit to film on the street — if your crew consists of five or fewer people.
People won’t bat an eye when you’re shooting, unless a really big celebrity is involved.
Spring for the hotel room in the city. You’ll want a central location and a place to power up your phone between takes.
Cinematographer/photographer Philip Grossman on how to reach, explore and capture imagery at some of the world’s most “impossible” places.
September 19, 2023
Making Content as a Mobile Creator
TL;DR
The mobile creator lifestyle isn’t just for influencers and Gen Z.
Specialization is all well and good, but it’s important to be adaptable and to identify the right medium to best tell a story.
Frederick Van Johnson thinks a mobile-friendly workflow can improve your craft while also making your content more relatable. Learn more at the NAB Show New York Photo+Video Lab session, “The Mobile Creator Lifestyle: Your Studio on the Go.”
Frederick Van Johnson is a photographer and podcaster who says that the main throughline of his work is content creation. He says he is now the kind of “storyteller where you kind of reach for the tools that best let you tell that particular story, whether it be a still photograph, or cinemagraph, or video, or audio or text or whatever.”
He also advocates for a version of work that is attainable for most people. And Van Johnson does so from the perspective that it will improve your work product, not just your lifestyle.
Watch our interview with Van Johnson broken into two installments (embedded below).
Van Johnson is probably best known for his This Week in Photo podcast, which he’s been making for more than a decade. It began as an audio-only venture, but Van Johnson says they added the video element when Google introduced Hangouts (RIP).
“Why not just do it live, kind of like those radio stations, where they have the camera in the studio?” Van Johnson remembers. “You know, like Howard Stern or whatever, they’ll have the camera in the studio, and you know that it’s being recorded for radio, but you get to be a fly on the wall in there.”
The subject of his podcast was an additional incentive; it’s easier to talk about photography when you can reference an image your audience can also see. It’s also easier as an interviewer to be “able to connect with someone visually, making eye contact, seeing those visual cues,” which he says then “helps you tailor what you’re going to say.”
Ultimately, Van Johnson says, “I think being able to connect that way with the person I’m interviewing, it just takes it to that next level.”
Listeners became viewers, and when Google sunset Hangouts, Van Johnson knew they couldn’t kill this element of TWiP. “That led us down the road of different tools, and things to try,” he says.
Tell the Story
Adaptability is a hallmark of success for modern creatives, and Van Johnson thinks experimentation helps keep him at the top of his game, or to put it another way, change gets his “creative river flowing.”
Van Johnson does understand the temptation to specialize, but he prefers to think of projects in terms of stories and to identify the correct tools to tell them accordingly.
Speaking of tools, he thinks it’s important to remember what your audience cares about. Hint: it’s not about the process. “Other photographers want to know how the sausage is made. The person that you’re creating the sausage for just wants to eat it, right? They just want to see the work.”
Put another way: “They just want you to tell the story. And if the story is provocative enough so that they get excited about looking at that thing, then who cares how it was made? Who cares how you get there? You made it, and you touch them in some way. Now they’re excited about your work.”
The Right Tool for the Right Job
As a creative working in multiple formats, Van Johnson understands the advantages and challenges of choosing not to specialize. But, he says, “In today’s world, especially considering where things seem to be going right now, it pays you to be that ‘stereotypical lifelong learner,’ where you’re always taking in new data and adjusting and kind of rolling with it.”
And some of that new data comes from reviewing tech specs. For example, cinematic HD became more accessible as camera form factors decreased and prices dropped. Van Johnson recalls an inspiring conversation with Vincent Laforet, who specializes in aerial photography and cinematography, that gave Van Johnson his light bulb moment.
“We are image makers, so yeah, of course, we can apply the knowledge that we’ve learned about light as a photographer, just at 30 frames a second now and tell stories narratively,” he says.
“Again, it’s the right tool for the right job. So still photography… is an unmatched art form. But there are other art forms that are another mechanism to share your creative voice that are popping up, like vertical video, like TikTok … folks do themselves a disservice by saying ‘no, I’m a still photographer; live video [is] for those video guys.’ Well, surprise, you are a video guy.”
And video itself is evolving. Augmented reality, virtual reality, and 360° video are all increasingly accessible.
New Mobile Ways of Working
Van Johnson also thinks creatives should be open to new ways of working.
“We don’t have to be chained to these offices anymore. I think that’s what the mobile creator lifestyle is all about. It’s cutting that cord and moving the mindset from even the word home office, right?” Van Johnson says.
“I think the medium that we’re playing in doesn’t need to play by their rules. We can be anywhere,” Van Johnson says. He adds, “I think being mobile gives us the flexibility and the power to be more relatable.”
Perhaps the best part is “The tools are literally telling you as a creator, ‘you now have more power in your hand, go do something with it,’” Van Johnson says.
Specifically, he says, “With Apple and the M1 and M2 processors, software like LumaFusion and DaVinci Resolve on the iPad for editing. There’s a million audio tools, photography tools, AI tools, all this stuff on these mobile devices, that when configured properly, with the proper peripherals, you can replicate what you have here in your home studio, but have the ability to just go pop off and go to, you know, Mexico or something, find a strong internet connection and get to work and keep creating.”
Not everyone may be prepared to take that leap, but Van Johnson encourages us to find new ways to make our workflows more mobile friendly.
Step one, he says, is to reconsider cloud-based tools (if you haven’t already).
“Those software-as-a-service applications have in many ways, in some cases, surpassed their, quote, ‘real application’ counterparts,” Van Johnson says. “So I would suggest that those folks start looking at it from that perspective… of what can I replace?… The more that you can get done in the browser, the more you are free from your desktop.”
If you’re already working in the cloud, he suggests, “If you have an established workflow, start incorporating next generation tools into that established workflow.”
By that, Van Johnson means, “try to get it done on a tablet and see where the holes are, see where the barriers are… Are they a limitation of the hardware and software, or are they a limitation of, you know, your gray matter, right? Which means you can overcome that and learn how to do things that way.”
After all, Van Johnson says, “You may be surprised. Some things will likely feel easier” on mobile.
Douglas Spotted Eagle shares his expertise on what equipment has proven the most useful for remote and on-the-go productions.
September 13, 2023
“Original”: It’s a Dance Battle with the Sony BURANO
Making the short film “Original,” directed by Unjoo Moon and shot by Dion Beebe, ACS, ASC.
TL;DR
Director Unjoo Moon partnered with cinematographer Dion Beebe, ACS, ASC to create a short film that tested the capabilities of the new Sony BURANO camera.
Moon opted to develop a dance film, “Original,” that would highlight the camera’s mobility and cinematic look, as well as its compatibility with the VENICE 2.
To test how the camera worked across a range of looks, Beebe devised a saturated world ranging from high key to a nearly noir-style black and white.
Director Unjoo Moon partnered with Oscar-winning cinematographer Dion Beebe, ACS, ASC to create a short film that tested the capabilities of the new Sony BURANO camera.
Moon opted to develop a dance film that would showcase the camera’s mobility and cinematic look, as well as its compatibility with the VENICE 2. Featuring an original score by Tushar Apte, Original is an exuberant, high-energy K-Pop style dance battle. “The whole spirit of this camera is about originality and about giving the creator freedom,” she explains.
“When [we] were talking about this concept, both in terms of what we wanted to sort of achieve artistically, and also what we thought would be a good way to push the cameras, we devised a very sort of saturated world — from quite high key all the way through to black and white — a sort of almost noir style,” Beebe recounts in a behind-the scenes look at the production, which can be viewed in the video below. “I was only interested in seeing the camera work across that range of looks.”
The BURANO camera, part of Sony’s CineAlta lineup of digital cinema cameras, is designed for single-camera operators and small crews. Combining exceptional image quality with high mobility, the compact and versatile camera features a hefty sensor that matches the VENICE 2. Like the VENICE series, the BURANO supports log recording as well as different color spaces including S-Gamut3 and S-Gamut3.Cine. It can reproduce the same color as all cameras in Sony’s Cinema Line, including the VENICE 2, which allows filmmakers to match cameras within the line.
“We were in a very stretched dynamic range purposefully and for me that was very much part of what I wanted to see both in the VENICE 2 and in the BURANO. Moving through the edit, you really were not aware that you were moving from the VENICE 2 sensor to the BURANO sensor back to the VENICE 2,” Beebe comments. “That compatibility, across the dynamic range, color interpretation and all of those things are important when I’m putting a package together and trying to complement a bigger sensor camera, like the VENICE 2. These two sensors, these two looks, really fall in line with one another.”
Making the short film “Original,” directed by Unjoo Moon and shot by Dion Beebe, ACS, ASC.
Boasting a powerful 8.6K full-frame sensor, the BURANO features an interchangeable lens mount that supports both PL and E-mount lenses, along with built-in image stabilization, in what Sony calls a first for digital cinema cameras. It can record digital files from HD to 8K, depending on the resolution, aspect ratio and codec, and supports multiple internal recording formats.
The BURANO is also equipped with an electronic variable ND filter, enabling easy adjustments in varied lighting conditions so you can reach optimum exposure without changing the depth of field. With E-mount lenses it gains increased flexibility, and it features adjustable pre-roll (cache) recording, ideal for unscripted filmmaking.
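The appeal of a variable ND is simple exposure arithmetic: absorb a change in scene brightness with ND stops instead of aperture, so depth of field stays put. Here is a toy sketch of that trade, with illustrative numbers only:

```python
# Toy exposure arithmetic for a variable ND: hold aperture (and depth of
# field) fixed and soak up brightness changes with ND instead.

def nd_stops_needed(brightness_change_stops: float,
                    current_nd_stops: float) -> float:
    """ND stops to dial in so exposure stays constant at a fixed aperture."""
    return max(0.0, current_nd_stops + brightness_change_stops)

def stops_to_density(stops: float) -> float:
    """Each stop of ND is ~0.3 optical density."""
    return stops * 0.3

# Scene gets 2 stops brighter with 3 stops of ND already dialed in:
nd = nd_stops_needed(2.0, 3.0)
print(nd, stops_to_density(nd))  # -> 5.0 stops, ~1.5 density
```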
“The idea behind this test was really to take the VENICE 2 and the BURANO and see how they really work together,” Beebe said. “Because when I’m working on a movie, there’s always a requirement for a sort of smaller body that has that versatility, whether I’m doing handheld or, like we’re doing here, just be a little more freewheeling.”
The new camera comes with four new cinematic looks (Warm, Cool, Vintage, and Teal and Orange), in addition to supporting industry-standard LUTs. Also, like the VENICE series, the BURANO features genlock and can be used for virtual production with large-screen LED displays. Learn more on the Sony website.
Getting ready to plan your journey at NAB Show New York? You won’t want to miss this opportunity to explore the synergies between live broadcast and cinema with the Cine+Live Lab!
This destination features a variety of educational sessions and production demonstrations centered on the trend of translating cinematic techniques to live broadcast productions. All sessions and demos are open to all-access badge holders, but off-site bonus programs may require prior registration.
Don’t-miss sessions include Color Accuracy: From On Set to Post, featuring colorist Warren Eagles in conversation with AbelCine Camera Technology Specialist Geoff Smith, and Hybrid Broadcast in the Real World, exploring use cases and projects involving a blended broadcast-cinema aesthetic and tools at top tech conferences in a conversation moderated by AbelCine director of rental Gabriel Mays, as well as a chance to check out the latest HBO Camera Assessment Series, and much more.
How “The Creator” Creator Gareth Edwards Is Thinking About AI
From “The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
TL;DR
New sci-fi feature “The Creator” is the latest project to use artificial intelligence as a backdrop, but all may not be as dystopic as it first appears.
Although the project has been in the works for years, director Gareth Edwards points out that releasing a film that so heavily involves debates around AI feels particularly prescient in 2023.
Edwards believes AI will fundamentally change filmmaking, allowing anyone to make big budget looking visions on a shoestring, but wonders where this will leave craft skills.
Edwards’ cinematic influences for “The Creator” span “Rain Man” to “Apocalypse Now” to “Baraka.”
“The Creator” – Gareth Edwards’ Vision | 20th Century Studios
The backdrop to the new sci-fi film The Creator is AI, which could be benevolent or could be evil. One AI manifests as a very cute young kid.
“This film will challenge what you believe,” says actor John David Washington. “It’s hard to know whose side to be on.”
That’s exactly what director Gareth Edwards, who wrote the script with Chris Weitz, wanted. Although the future 50 years hence looks “as if someone made Apocalypse Now in the Blade Runner universe,” according to Ryan Scott at SlashFilm, Edwards doesn’t paint AI as black or white.
“Should we embrace it or should we destroy it?” he asks.
Releasing in cinemas on September 28, this is Edwards’ first film since Star Wars: Rogue One in 2016, around the same time he began writing the film. The starting point was to make an allegory about robots, “a fairy tale for people who are different, that look different from us, and that we treat as the kind of the enemy or the inferior, and that they do the same back to us,” he says in an interview with Joe Deckelmeier for Screen Rant.
He’s kept the robots but added AI so that the robots are sentient. It was only as they were shooting the film in the first half of 2022 that the latest wave of AI technology became front page news.
“I thought I was making a subject matter that was like three decades away. Like, there’s no way we’re going to witness this. And then whilst we were filming, people are sending me links to news items about whistleblowers in big tech saying that the AI was sentient. And it was like, Whoa, what’s going on?”
Artificial Intelligence in “The Creator” | 20th Century Studios
“As we’re making The Creator, AI is getting better and better,” Edwards says. “It feels like we’re at that tipping point now and this movie questions what does that look like 50 years from now, when AI is more embedded as part of society.”
Equally presciently perhaps, the film also depicts half of the world having developed AI and the other half being actively against it, following a catastrophic malfunction. Interestingly, it is the West that wants to ban AI while a region of Southeast Asia fights to keep it as a force for good.
Scenes in the film depict an anti-AI movement “with people with protest signs, for and against AI,” the director told Collider’s Perri Nemiroff, an idea he thought was absurd at the time. “And now, I live very near the Studios [in LA] and we drove past and that’s exactly what’s happening [with the writers’ strike].”
The movie is set in 2070, but Edwards told Nemiroff he should have picked 2024. “But I picked 2070 because I didn’t want to make the mistake Kubrick made with 2001: A Space Odyssey [which was made in 1968, but the Jupiter mission it depicted remains distant even now]. So I was like, I’m gonna pick something way downstream.”
Making a Smaller Budget Go Further
Distributed by 20th Century Studios, the film itself was shot on a relative shoestring budget of $80 million, but looks like a blockbuster costing significantly more. For Edwards this seems a welcome retreat from the huge budget he handled for Godzilla in 2014 and back to the more DIY approach with which he made breakthrough sci-fi-horror Monsters in 2010.
Counterintuitively, the secret was to largely ditch CGI and LED volumes (though both were used) and focus on shooting in actual locations, with world-building production design added after the fact.
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
“What you normally do is you have all this design work and people say, ‘You can’t find these locations,’” he explained during a Q&A session hosted by IMAX, and reported by SlashFilm’s Vanessa Armstrong.
“[They’d say], ‘You’re going to have to build sets in a studio against greenscreen. It’ll cost a fortune.’ We were like, ‘What we want to do is go shoot the movie in real locations, in real parts of the world closest to what these images are. Then afterwards, when the film is fully edited, get the production designer, James Clyne, and other concept artists to paint over those frames and put the sci-fi on top.’”
So they did, and the crew went to 80 locations, which is far more than one would normally use for a movie of this size.
“We didn’t really use any green screen,” he said. “There was occasionally a little bit here and there, but very little. If you do the maths, if you keep the crew small enough, the theory was that the cost of building a set, which is typically like $200,000, you can fly everyone to anywhere in the world for that kind of money. So it was like, ‘Let’s keep the crew small and let’s go to these amazing locations.’”
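Edwards’ back-of-the-envelope logic is easy to reproduce. The sketch below plugs in purely illustrative figures (only the $200,000 set cost comes from his quote; fares, lodging and days are assumptions) to show why a small crew tips the math toward real locations:

```python
# Toy version of Edwards' "do the maths": compare one set build against
# flying a small crew to a real location. Only the $200,000 set figure
# comes from the quote above; everything else is an assumption.

SET_BUILD_COST = 200_000          # typical set cost cited by Edwards
AIRFARE_PER_PERSON = 2_000        # assumed round-trip fare
HOTEL_PER_DAY = 250               # assumed lodging per person per day
SHOOT_DAYS = 5                    # assumed days on location

def location_trip_cost(crew_size: int) -> int:
    """Total travel cost for the crew under the assumptions above."""
    return crew_size * (AIRFARE_PER_PERSON + HOTEL_PER_DAY * SHOOT_DAYS)

for crew in (20, 40, 60):
    cost = location_trip_cost(crew)
    verdict = "beats the set build" if cost < SET_BUILD_COST else "does not"
    print(f"{crew} crew: ${cost:,} {verdict}")
# With these numbers, even a 60-person crew comes in under $200,000.
```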
“The Creator” | Official Trailer | 20th Century Studios
Edwards shot the film using Sony FX3 cameras, a budget choice but, as he points out, one that is barely distinguishable in performance from far more expensive dedicated cinema cameras.
“The difference between the greatest digital cinema camera you can buy and a camera like the FX3 is minute, hardly anything,” he told Nemiroff.
The big advantage for the production was the camera’s ability to record in different light scenarios, including the capability of shooting at ISO 12,800 “so we can shoot under moonlight.”
That in turn enabled the production team to shoot with fewer lights, cutting costs and increasing mobility. The filmmakers developed a lightweight lighting rig that a crew member could move in seconds, rather than minutes, as Edwards explained to Armstrong.
“I could move and suddenly the lighting could re-adjust. And what normally would take 10 minutes to change was taking four seconds.”
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
This afforded room for the actors to improvise and for Edwards to capture more of a documentary feel. “We would do 25-minute takes where we would play out the scene three or four times and just give everything this atmosphere of naturalism that I really wanted to get, where it wasn’t so prescribed. You’re not putting marks on the ground and saying, ‘Stand there.’ It wasn’t that kind of movie.”
Edwards’ lighting operator would move with the camera, just as a boom operator would, he told Nemiroff.
“We’d do a little dance together in real time,” he said. “Normally [changing light setups] would take half an hour. So it just liberated us completely. And I’m never gonna go back, to be honest.”
He started the project with one of the world’s most in-demand cinematographers, Greig Fraser, ASC, who won the Oscar for his work on Dune last year. As Edwards tells Nemiroff, “Greig is one of the few people in the world I would trust to give a camera to and say, you shoot it, and just hand it over. He’s got an amazing eye. The whole world seems to know that now.”
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
Having done prep for the movie in 2019/2020, Fraser got the offer to shoot Dune and its sequel for Denis Villeneuve, and suggested DP Oren Soffer take over behind the camera.
According to Edwards, Soffer was a protege of Fraser’s. “So I looked at his work, it was really strong. We chatted and I really liked him. And so basically, there’s this transition where Greig carried on remotely but Oren picks up the reins.”
In conversation with Armstrong, Edwards revealed that the visual design of The Creator was inspired by the simple idea, “What if the Sony Walkman won the tech war instead of the Apple Mac?”
“The way we tried to quickly describe the design aesthetic of the movie is that it’s a little bit retro-futuristic.”
Likewise for the insect-like robots in the film, which they tried to design as if an insect had been made by Sony. “We took products and tried to turn them into organic-looking heads. We took things like film projectors and vacuum cleaners and just put them together, deleted pieces and kept experimenting. It was like DNA getting merged together with other DNA, trying to create something better than the previous thing.”
“AI Democratizes Filmmaking”
When it comes to AI in filmmaking, Edwards is equally even-handed. He was surprised by what is already possible with AI tools.
“My initial thoughts were that AI will never be able to understand the beauty of an image but actually websites like Midjourney are pretty good. [So, then you think] soon it’ll be moving footage. And then maybe you won’t need cameras,” he told GameSpot senior editor Chris Hayner in an interview for Fandom.
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
“It’s going to change filmmaking so much,” he continues, way beyond the CGI-realism breakthrough of Jurassic Park. “It’s going to be a big seismic shift. My hope is that it sort of democratizes filmmaking, like it doesn’t cost $200 million anymore to go make something that’s in your head. You can kind of do it from your bedroom. But then the question is when everyone can make Star Wars from their bedroom, will there be any specialist [crafts] anymore?”
Edwards is known for using literal reminders of cinematic lodestones for his projects. SlashFilm’s Jenna Busch reports that he’s hung different movie posters in the edit suite as inspiration for each film.
Which posters made the cutting room for “The Creator”? Some are predictable, others may (at first) seem like head-scratchers; as noted above, the influences run from “Apocalypse Now” to “Rain Man” to “Baraka.”
AI is a given for our future, but neither positive nor negative impacts are a foregone conclusion. We still have many choices to make about GenAI.
September 11, 2023
Translating Anime to a Live-Action Adventure for “One Piece”
From the Netflix series “One Piece”
TL;DR
One of the most expensive series ever made, Netflix’s $144 million live-action adaptation of bestselling manga pirate adventure “One Piece” needs to appeal to die-hard fans and newcomers alike.
Editor Tessa Verfuss discusses the editorial decisions behind the series’ action scenes, which employ dramatic close-ups and cool camera angles. These provide a level of self-awareness rather than striving to be realistic and self-serious.
She also explains how she used framing, rhythm and pacing to help create exaggerated, larger-than-life sequences and hero or villain moments.
The series was shot on the ARRI Alexa LF outfitted with custom-made Hawk MHX Hybrid Anamorphic lenses to lend it a comic book style and help add weight to certain moments.
The production was located at Cape Town Studios and employed 1,000 local crew members, many of whom had worked on Starz pirate adventure series “Black Sails.”
Netflix reportedly spent $144 million on the eight-episode series One Piece, making it one of the most expensive shows ever made. It’s also the biggest show the streamer has made in Africa. Production was located at Cape Town Studios, where it employed 1,000 local crew members, many of whom had worked on Starz adventure series Black Sails.
Editor Tessa Verfuss was one of these Black Sails alums. “There was a lot of buzz around Cape Town that there was this massive show coming in,” she told Nerds and Beyond. “The president came to visit the set, that’s how big a deal it is for us. When you hear something that big is coming, obviously you’re gonna be interested whether or not you’re familiar with the IP. And then when I heard it’s pirates, I was like, ‘Oh, well I’ve done pirates on Black Sails!’ Sword fights, that’s totally right up my alley, but this is a bit different from Black Sails — a different vibe.”
One Piece is a live-action adaptation of the bestselling manga of all time. Debuting in 1997, the serialized pirate adventure is about the search for the elusive One Piece treasure, led by Monkey D. Luffy. The story sprawls across 105 volumes, all written and illustrated by Eiichiro Oda, who is an executive producer on the Netflix show.
“There’s this strong fantasy element, it’s not dark and twisty,” says Verfuss. “It’s optimistic, joyful, funny, sincere — it’s so heartfelt, and everyone really loves that about One Piece. You couldn’t get much more different when it comes to pirate properties.”
Images from Season 1 of “One Piece,” featuring Iñaki Godoy as Monkey D. Luffy, Emily Rudd as Nami, Mackenyu Arata as Roronoa Zoro, Taz Skylar as Sanji, Jacob Romero Gibson as Usopp, Vincent Regan as Vice-Admiral Garp, Michael Dorman, Aidan Scott as Helmeppo, Ilia Paulino as Alvida, Maximilian Lee Piazza as Young Zoro and Colton Osorio as Young Luffy, with director Marc Jobst behind the scenes. Cr: Casey Crafford, Joe Alblas, Raquel Fernandes/Netflix.
Although filled with VFX, much of the budget went on large-scale sets on soundstages and water tanks. The production design department was given a head start on the massive ships required for the series by repurposing ones initially built for Black Sails.
Verfuss was one of several editors on the series, with credits for cutting four episodes. “With an anime adaptation it’s not trying to be serious or realistic,” she says. “You can use the framing, the rhythm, the pacing to kind of make things a little bit larger than life, a little bit exaggerated. If you’re introducing someone, you get to give them those hero or villain moments. You can go to the extreme closeup and it doesn’t have to have this completely naturalistic feel that you would have in a different kind of show.”
One Piece is shot on the ARRI Alexa LF with custom-made Hawk MHX Hybrid Anamorphic lenses. The show’s cinematographers, led by Nicole Hirsch Whitaker, ASC, who shot the first two episodes with director Marc Jobst, covered each scene with two cameras to provide editorial with a choice of angles and performance.
“In terms of getting those characters to really work you are looking through those takes, finding those moments, and choosing your angles,” says Verfuss.
“We’ve got these wide-angle lenses that are used throughout the show, and they make these really strong frames, which I’m hoping people feel is a bit more like a comic book style. They really help add weight to certain moments.”
Elaborating on the process in an interview with Screen Rant, Verfuss said her goal was to bring an anime style to the fight scenes.
“You have these dramatic close-ups and cool angles so I think it’s bringing a level of cool and self-awareness to it [rather than] trying to be realistic and self-serious. The mechanics of a fight scene are one side of it, but it’s really about the emotions of the scene and understanding the character’s motivation.
“What you want from a live action [compared to animation] is you want your characters to feel human, characters that you can root for and get behind and not a cartoon.”
Netflix needed to appease die-hard fans — of which there are millions around the world — and reach an audience who had never heard of the property.
“You have to be aware of when the show is making the conscious decision to put an Easter egg in or is paying homage to something,” she said. “Sometimes it’s very subtle.”
It is content that could end up being glossed over given the fast pace of the show, which is where co-showrunner Matt Owens played a role.
Owens has “a really deep knowledge of the property,” Verfuss says. “So having someone steering who can say, ‘Wait, no, there’s something important here,’ was a huge help. We also had stacks of manga in the corners of the production office that you could go and flip through, and I watched some of the anime during my research.”
The proof of their success will be whether Netflix greenlights a second season. After all, the first eight episodes barely scratch the surface of the One Piece universe.
“Hopefully we got this one right, and that fans are gonna love it and think this is the best version,” she signed off to Screen Rant, “or if not the best version of One Piece, then the best version of a live-action anime that you could possibly want.”
The PPC NYC programming is produced in partnership with FMC and comprises 24 sessions designed for production and post pros.
September 11, 2023
Get It To Go: What to Pack for Production on a Glacier
From Aidin Robbins’ documentary “Why Europe’s Tallest Mountains are Getting More Dangerous”
TL;DR
In a 6-minute behind-the-scenes YouTube video, documentary filmmaker Aidin Robbins shares his process for making a 17-minute movie about why mountaineering in the Alps is increasingly treacherous.
The project required the three-man crew to pack as light as possible, working with a Sony A7S III, a Panasonic Lumix S5 and a DJI Air 2S drone, plus limited lenses and accessories to accommodate the climbing equipment they’d require for the trek. Robbins shares a full production gear list; find it below.
Robbins tapped guide Dave Searle to also act as a local expert on camera, breaking up his voiceover during the film, in addition to ensuring they safely got from point A to point B.
Mountaineering requires a lot of specialized equipment. So does filmmaking. Combining the two requires both special skill and a willingness to work in challenging conditions with limited gear.
Aidin Robbins was up for the challenge. “We were shooting in very unique conditions, very unique terrain, and photographing some unique and incredible landscapes. It was completely different from any other project I’ve ever taken on,” he says.
Accompanied by guide Dave Searle, Robbins and Eric Matt shot a sub-20-minute film documenting how melting glaciers in the Alps have made mountaineering there more dangerous.
Robbins, Matt and Searle collaborated on a detailed itinerary that would help them get through a shot list that “included lots of up-close footage of the impacts of climate change: places where glaciers have receded, meltwater lakes and streams, rock falls, but also the gear setup and process of mountaineering and the beauty of these high alpine landscapes” — all in one week.
In total, Robbins says he came home with “11 hours of selects and the rather daunting task of turning it into a 17-minute video.”
Searle’s role as guide extended to appearing as an expert on camera and as a voiceover in the film.
To record the interview, Robbins says, they “had two different interview setups: one inside, speaking a bit about Dave’s history and how he got into mountaineering, and then a second one outside, speaking more specifically about the mountains and glaciers, so that one we made sure to shoot with mountains and glaciers obvious in the background.” Both scenes were shot from two different angles.
Additionally, the interviews were minimally lit. The outdoor setup relied only on afternoon light, shielded by a “rock which provided a bit of shade and softened out that light.” The indoor setup consisted of “a window as a key light and used one of these little Godox light panels as a subtle little fill light on the other side of his face.”
For audio, they relied on “a Røde VideoMic Pro on top of my camera. Not the most glamorous professional interview setup, but I think it worked,” Robbins says.
In addition to the alpine footage, Robbins added history and context with “archival elements, footage, photos, drawings, maps, newspaper articles,” some dating back to the 1800s.
Hearing the Mountains
The soundtrack was also crucial to the project.
“Music is a huge part of this edit,” Robbins says. “It guides the movement of the story and also informs a lot of the changes in tone as we transition between these beautiful landscapes, and the threats facing them.”
But the documentary didn’t have a soundtrack to accompany every shot. He used recorded natural sounds “to convey the power and volatility of these landscapes. There’s a lot of like wind noise in the background, as well as a lot of kind of rumbling sounds of rocks and ice crumbling and falling down.”
And as with many documentary films, Robbins says he “structured the edit around my voiceover, splicing in sections from our interview with Dave to provide a local perspective, talk in detail about the process and dangers of mountaineering, and transition between sections.”
SOME VERY SPECIFIC TIPS
In his BTS video, Robbins offers some glacial filmmaking pointers that he learned from their guide and picked up in the course of their trip. If you’re interested in a similar project or just curious about the peculiarities of these conditions, here are a few takeaways:
Glacial ice is most stable early in the morning. Therefore, Robbins says he and his crew “started shooting well before sunrise.”
Conditions constantly change on glaciers, so don’t plan to rely on maps or set routes. “That presents some navigational challenges but also some really cool photographic opportunities and some glaciers have all of that but are also covered by a layer of snow,” Robbins explains.
Drones are almost necessary. “Some areas of the glaciers, like large crevasses or serac edges, are just too unstable to access on foot,” he says. “So to photograph these, we hiked as close to the edge as we safely could and then sent the drones over to capture up close footage of those features.”
Rocks can make decent substitutes for tripods. “Any time lapses you see are just balanced on a rock,” Robbins says.
THE GEAR
What’s a good “making of” video without a detailed gear list?
“We slimmed down our kits as much as possible, each of us only carrying one or two cameras and lenses, along with a drone and basic accessories,” Robbins says.
The latest travel film by Grafton Create utilized the Sony FX6, “the obvious choice” for the low-light imagery they wanted.
September 11, 2023
The NAB Show New York Events You Can’t Miss
This year’s NAB Show New York (October 24-26) is jam-packed with educational content and hands-on learning opportunities, on and off the show floor. With so much to see and do, it can be overwhelming to make your game plan, so we curated a few must-see-and-do ideas right on the Show floor.
Insight Theater Top 5
Welcome and What’s Next, featuring NAB CEO Curtis LeGeyt in conversation with Evan Shapiro (Wednesday at 10:30 a.m.)
Video for Social Impact: A Film Screening and Conversation, featuring Amanda Needham, Becky Morrison, Gloria Pitagorsky and Rosie Pongracz (Wednesday at 4 p.m.)
The Virtual Production Revolution: A Real Time Love Affair, featuring Jim Rider, Christina Lee Storm, Arturo Brena and Ian Fursa (Thursday at 11:30 a.m.)
AI Virtual Humans in Broadcast News, featuring Marc Scarpa in conversation with Lori H. Schwartz (Thursday at 3 p.m.)
Top 3 for Photographers
Go Pro With AI: New Pipelines Change the Game for Content Creators, featuring Elena Piech (Wednesday and Thursday at 2:10 p.m.)
Fujifilm Photowalks (offered on Wednesday and Thursday at 10:30 a.m., 1 p.m., and 3:30 p.m.)
Creators of New York: A Photo Competition Case Study, featuring Nick Urbom, Ellen Frances and Creedance Kresch (Thursday at 11:30 a.m.)
Top 4 for Filmmakers and Cinephiles
The Creative Process of Visual Artist Ryan Bauer-Walsh, featuring Ryan Bauer-Walsh (Wednesday, 10:30 a.m.)
Color Accuracy: From On-Set to Post, featuring Geoff Smith, Warren Eagles and Arthur To (Wednesday at 12 p.m.)
HBO Camera Assessment Series Screening & Session, featuring Stephen Beres and Suny Behar (Wednesday at 4:30 p.m. and Thursday at 11 a.m.)
Cinematic Lighting: The Cinematographer & Gaffer Relationship (Thursday at 2:45 p.m.)
AI tools are here, and it’s pointless for us to pretend they don’t exist, says post-production consultant and educator Jeff Greenberg. Yes, there’s some anxiety about the jobs AI may take away, but to Greenberg’s mind, what AI actually takes away “is a lot of what we don’t want to do to begin with.”
In this video tutorial from NAB Amplify’s Video Learning Lab, Greenberg gives a tour of some of the multitude of tools available to automate aspects of post-production. These are either standalone products, free or paid, or part of much larger, industry-standard packages from Adobe, Blackmagic Design (Resolve) and Apple (Final Cut).
He starts off by demoing Spleeter, an open-source tool developed by music streaming platform Deezer.
“It allows you to break down a completed song or piece of audio into stems for further investigation,” he says. “You can use Spleeter to separate audio based on who is singing, or to break down a song based on what instrument is playing. Spleeter can also help you make certain instruments louder or cut them from the song altogether.” In the video Greenberg shows you how to use Spleeter to quickly modify sections of music.
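If you want to experiment along with the tutorial, here is a minimal sketch of Spleeter’s Python API. The file names are hypothetical, and exact behavior may vary between Spleeter releases:

```python
# pip install spleeter
# Minimal stem-separation sketch using Spleeter's Python API.
# 'song.mp3' is a hypothetical input file.
from spleeter.separator import Separator

# Load the pretrained four-stem model: vocals, drums, bass and other.
separator = Separator('spleeter:4stems')

# Writes one audio file per stem into output/song/.
separator.separate_to_file('song.mp3', 'output/')
```

From there, raising, lowering or muting an individual stem before remixing is how you would make one instrument louder or cut it from the song altogether, as Greenberg describes.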
He also discusses AI tools including Final Cut video tags, Rough Cut, Wobble, auto media converter, Runway, Kognate.com and Color Lab AI. He also covers Dynascore from Wonder Ink, Neubert, iZotope, CrumplePop and Audio Design Desk, which can variously assist with tagging, stabilizing shaky footage, background removal, color grading and audio editing.
For Greenberg, the biggest plus about AI tools is that they take away the drudgery. “I want to do the fun stuff where I get to bring art to the imagery.”
Some tools will reframe and reformat video in seconds. When you are typically being asked for at least two or three different deliverables, the idea of smart conform has to be a bonus, he says.
He notes that the market for AI products is “constantly evolving” as the next set of tools comes out, and that some will produce better results than others.
Such tools are making their way into everyday video creation, if not yet at the highest end of postproduction, then certainly for corporate presentations and videography. You may choose to use one or many on one project, or none at all, but they are all designed to jumpstart processes that will otherwise simply suck up time.
Aimed at empowering creatives, the AI Creative Summit will be held at NAB Show New York Oct. 24-25.
September 11, 2023
Evan Shapiro Amplified: Rebuilding the Industry
TL;DR
In “Evan Shapiro Amplified: Rebuilding the Industry,” media’s official Unofficial Cartographer declares that “disruption is now the operating system” of the Media & Entertainment industry.
Shapiro points to the decline of traditional “triple-play” bundles providing broadband, phone and video as one of the root causes of our so-called “media apocalypse.”
Incremental changes won’t suffice, he says. Shapiro calls for a seismic shift in business models and strategies, urging the industry to move “out of the burning house” and into new structures.
Expect a surge in M&A activity and a broader focus that includes emerging platforms like video games, which Shapiro describes as the “new form of social media.”
The power now lies increasingly with the consumer. Shapiro advises leaning into consumer wants and desires as the key to navigating the media apocalypse.
Ready to rebuild the Media & Entertainment industry? Evan Shapiro thinks it’s about time.
In the newest episode of “Evan Shapiro Amplified,” media’s official Unofficial Cartographer takes us on a whirlwind tour of the current media apocalypse, as he calls it. Watch the full video above.
From the crumbling edifices of traditional business models to the burgeoning skyscrapers of consumer-centric approaches, Shapiro lays down the blueprint for the industry’s future.
“Change is now a constant, it’s second by second. Disruption is now the operating system of our ecosystem,” he declares.
But what does this mean for an industry that has been in a state of constant flux since its very inception? “Disruption has caught up to the maturation rate of the business models in traditional media. And meanwhile, big tech has gotten bigger and more powerful,” he warns.
This isn’t just a phase; it’s the new normal. And it’s not just about keeping up; it’s about staying ahead.
Plotting a new course for the industry, Shapiro emphasizes the urgent need for transformation and adaptability, challenging us to rethink old paradigms and embrace new opportunities. Buckle up, because we’re not just navigating change, he says, we’re steering it.
Disruption as the New Norm
Across M&E, the winds of change aren’t just blowing — they’re gusting. “Business models are being reinvented in real time,” says Shapiro, pointing to COVID as the primary driver of accelerated content consumption.
There are a litany of disruptions that serve as case studies for the industry’s transformation, and Shapiro nods to Disney as a prime example: “During COVID, because of all this new usage… Disney caught up and surpassed Netflix in total number of global subscribers in less than three years,” he says. “And then 90 days later, they fired their CEO. That’s how quickly the whole concept of media changed during lockdown.”
He also cites the decline of traditional triple-play bundles, now being replaced by comprehensive platforms like Amazon Prime. Within sports broadcasting, Shapiro remarks on YouTube’s acquisition of NFL Sunday Ticket and Apple’s deal with Messi for Major League Soccer. Spotify’s struggle to make its low churn rate profitable is another key point on his radar, underscoring the new economic realities of the music industry. These aren’t mere anomalies; they’re harbingers of an industry-wide shift where diversified revenue streams are the new norm.
Shapiro is unequivocal: the old ways won’t cut it anymore. This isn’t a mere adjustment; it’s a seismic shift demanding a radical reinvention of business models and strategies. The media cartographer’s insights are a clarion call for the industry. The traditional power structures are being upended, and new players are entering the field. This isn’t the time for incremental adjustments; it’s a time for radical reinvention of how the M&E industry operates.
So, what’s the way forward? According to Shapiro, it’s about using disruption to fuel innovation and transformation. It’s about recognizing that the old rules no longer apply and that clinging to outdated models is a recipe for obsolescence. In essence, the winds of change are a harbinger of opportunity, not just a storm to weather. And as Shapiro makes abundantly clear, those who adapt will not only survive but thrive in this new norm.
The Road Ahead: Adaptation, Innovation & Diversification
“It’s not enough to try to put curtains on the house that’s burning down, we have to think finally, about moving out of the house and into a new structure,” Shapiro insists. Incremental changes, he observes, are what led the industry into its current predicament.
Shapiro advocates for businesses to adopt a consumer-centric approach. “How do you survive the media apocalypse?” he asks. “The answer is to lean into your consumers’ wants and desires.”
The media industry, he says, has spent the last 25-30 years ceding more and more power to consumers “every single day, every single month, every single year. And now the leaders of our industry are confused that consumers have so much power.”
Companies that offer a range of “must-have” services will be the ones to thrive, says Shapiro. “One of the things I think most people fail to realize about the current media apocalypse is it wasn’t due to the unbundling of cable TV. That is not the problem,” he explains. The real problem, he says, is the unbundling of what he calls the “triple-play” of broadband, phone and video as a single bundle. “The more hooks you get into the consumer’s home the less likely they are to break up with you because it’s more difficult.”
Shapiro predicts a surge in merger and acquisition activity in the coming years as companies position themselves as providers of consumer must-haves.
The industry, he says, needs to look beyond traditional forms of media. Video game platforms are one area ripe for exploitation. “This is not just a place where people play games, it’s also the new form of social media, people are going there and playing games and then hanging out and talking with friends,” he observes.
In this rapidly evolving landscape, Shapiro’s advice is clear: adapt or be left behind. “We are going to have to stop doing things because we’ve always done them, we’re going to have to do new things that have never been done before.”
Media cartographer and industry observer Evan Shapiro is set to deliver the keynote address at NAB Show New York! Known as media’s official Unofficial Cartographer for his visual charting of the industry’s continual evolution, Shapiro’s speech will center on “What’s Next” for Media & Entertainment. He’ll use this keynote to lay out what to expect in the next era of media, whether we’re ready for it or not.
Attendees can look forward to new research and insights, as well as Shapiro’s honest assessment of how the M&E industry can grapple with its next era. Preceded by remarks from NAB President and CEO Curtis LeGeyt, this keynote session is scheduled for Wednesday, October 25, at 10:30 a.m. on the Insight Theater stage.
Our “official unofficial cartographer,” Shapiro will use this keynote to lay out the next era of media, whether we’re ready for it or not.
September 10, 2023
Data and Creativity Taste Great Together
TL;DR
To be effective, data professionals and researchers must be able to tell a story with numbers and an emotional hook, according to Maverix Insight’s Liz Huszarik.
Trends can either be “hardcore” — ones you see in data and can model — or cultural, perhaps observed anecdotally in research labs or on social media.
Learn more about leveraging and understanding M&E trends at NAB Show New York’s Insight Theater session moderated by Huszarik and featuring a panel of five creatives who understand the importance of data.
If you think data is boring, you’re talking to the wrong people. That’s one of many takeaways from a recent conversation with three Warner Bros. alumnae: Liz Huszarik, Brooke Karzen and Tricia Garret Melton.
“I always want to understand why people do what they do,” says Huszarik, former Chief Research & Data Officer at Warner Bros.
She’s turned that desire into a profession, by decoding data for the media and entertainment industry, and Huszarik knows that effective data isn’t limited to presenting clean numbers. “If I didn’t tell a story with the data, if I didn’t make it compelling and have an emotional hook, it meant nothing to” stakeholders and creatives.
“I can take raw data and turn it into stories, and turn it into action, or action steps, that my former colleagues, Tricia and Brooke, can take based on the data,” Huszarik says.
For her part, Karzen says her three decades in the biz (including pioneering unscripted TV and creating “The Bachelor”) have proved to her that “data, research was crucial to both our ability to identify trends in the market, for developing and selling new shows, as well as keeping those long, long running franchises going.”
And Garret Melton, the former Global Kids, Young Adults and Classics CMO, says, “What we do as storytellers, how we engage with our audiences, is so driven by culture… I think it’s so important to be connected and to enter user insights and data smartly.”
You can watch our full conversation (above) or opt to engage in four parts, embedded below.
So how does one use data effectively?
“Data tells you what happened in the rearview mirror. We need to talk to consumers to understand why,” Huszarik explains. “Why did they do that? Why did they behave that way? Why did they make that choice?”
Additionally, it’s helpful to differentiate between a “hardcore” trend that you see in data and can model to predict what’s next, versus cultural trends, perhaps observed anecdotally, at first.
As examples, Huszarik shares two phenomena: cord cutting and children’s technological literacy.
The data-predicted trend played out like this, Huszarik recalls: “We were following data points, consumer subscription to cable, and … back in 2015, we said by 2023-2024, you’re going to be at fewer than 50% of U.S. homes having cable in the home. No one wanted to hear that at the time. But we as an organization, as an enterprise, we had to understand that and what that meant in terms of models.”
On the other hand, Huszarik references an interaction at one of the fan research labs. “This woman said, she was talking about her young, young kids, ‘they can swipe before they wipe.’ … First of all, you’re creative. But second of all, was, like, what is she saying? They’re taking control of a device; they’re never going to have to wait until something is on at a certain time, a certain day of the week. They’re taking control of all of their behavior … while they’re still in diapers, right? … This is where we’re headed.”
Of course, Huszarik notes, “The observational trend, you’re going to try to tease out or substantiate with data. And you know, we do that, but it’s first when you hear it, right, based on a value or perception. But the data driven trends where you can really see and then predict hardcore, are critical.”
But just having the data doesn’t mean you’ll know what’s coming down the pike. Neither does it mean that the decision makers will accept what you’re telling them. But sometimes, data helps save the day (or at least make the right call)!
TV Trends Done Right
The COVID-19 pandemic presented a lot of challenges for television production. It also caused some changes in consumer sentiment, Karzen explains.
She recalls, “We began to realize, both via social media, as well as through information that we were sourcing through Liz and her team, there was this desire, this need, this hunger for nostalgia, which is comfort, by the way.”
To capitalize on that feeling, Karzen says, they looked to Warner Bros.’ extensive library of titles and franchises. In particular, she cites the popular reunion shows. The “Friends” reunion had been in the works pre-2020, but they also added reunions for Harry Potter, “The West Wing” and other popular franchises “because of the comfort it gave” audiences to tune in.
An earlier trend, Karzen says, resulted in the creation of “Little Big Shots,” hosted by Steve Harvey. They observed that Ellen DeGeneres’ TV clips would go viral whenever she had a cute kid on the show. They spun that idea off into the program, a 21st-century version (also capitalizing on nostalgia) of “Kids Say the Darndest Things.”
Garret Melton also observed nostalgia as a key trend in her sector. It manifested as “parents [who] really wanted to reconnect with their kids” by sharing their interests with their children. For her team, the franchise that they chose to play up was DC Comics and Batman. But rather than a reunion show, they produced all-new programming, “Batwheels,” targeted to preschoolers (and their caregivers).
“Parents — and dads in particular — were so excited about the opportunity to bring their love for Batman to their four year old, right, in an age appropriate way,” Garret Melton says.
For another of Garret Melton’s properties, TCM, the nostalgia trend, surprisingly, did not work magic. Instead, her team leveraged what she calls “recontextualization and the reframing of looking at older films or older movies through a new lens today.”
She says this trend helped to modernize and make relevant the legacy brand, connecting it to new viewers. Recontextualization, Garret Melton explains, “led us to reposition the brand, where it’s about where culture and classic intersect; led us to a new tagline, ‘where then meets now;’ and led us to create a new programming event called ‘Reframed,’ where we played, you know, movies like ‘Breakfast at Tiffany’s’ … But we put context around it. And that was really powerful. Our audiences really, and, frankly, press really responded to that.”
Leveraging Data
Of course, having this information in the right hands doesn’t mean there will be a foregone conclusion.
“I do not believe facts, stats and data change hearts and minds. And I believe you must take that data and tell a compelling story that connects … on an emotional level to actually penetrate and ensure they hear and they leverage your data,” says Huszarik.
To put it another way, she says, “If you cannot edit and tell a simplified, emotionally charged story, then your data doesn’t have the value, the punch and the potential that it might.”
Garret Melton clearly took Huszarik’s message to heart and shared what she calls a politically incorrect parable: “Data, research, is like a lamppost, right? And there are three ways that people tend to use it. One, they use it like a hooker, and they just lean on it. Two, they use it like a dog would, and they piss all over it. Or three, they use it as it’s intended: for illumination.”
There is another option for leveraging data, or at least, another way of looking at its use.
Karzen puts it this way: “Are you going to use that information as a tool or as a weapon?” Data could be weaponized in the boardroom, for example, “if they were looking for a reason to kill you[r show], be honest.”
Ideally, though, she says that her team would utilize data as one of many tools. And she would take time to soak it in, to feel it rather than analyze the numbers. “The key is moving forward with the things that feel positive, but also don’t use it as a weapon to destroy something that shouldn’t be destroyed,” Karzen says.
In order for that data not to be weaponized, Huszarik emphasizes the relational nature of her work, saying she offers guideposts and support for the executives who are making programming or marketing decisions.
For her part, Garret Melton says, “I think it’s great when research, when data confirms what we believe. But … we need to know what not to do. I actually like it when insights come back that surprise me.”
However, that surprise shouldn’t keep her in the dark forever. “It’s not enough just to tell me, ‘Well, that didn’t work.’ Help me understand why. Because often, it’s like one little thing, that if you just tweak a certain way, all of a sudden the reception is completely different.”
In order to be ultimately effective, Huszarik says, “There has to be a great deal of trust and collaboration between the research and data teams and the creative teams.”
That trust is earned because “we’re partnering up at the very beginning. And then throughout. Now, I’m not bogging them down with all the numbers and the nuances. But we’re having high level business questions, strategy questions,” Huszarik explains.
Predictions and Hopes
When asked to look forward and forecast exciting or worrying trends, Karzen hedged a bit, noting the uncertainties brewing from the ongoing Hollywood strikes.
However, she ultimately said she was excited to see increased “diversity within storytelling,” behind and in front of the camera, and she hopes that trend continues and expands. For example, two unscripted shows, “Love Is Blind” and “The Ultimatum,” Karzen says, “are both becoming huge hits on Netflix. Those shows, those formats, are allowing for more diversity in the cast.”
However, Karzen is “concerned now that this is going to stop because people are taking chances. Now, they’re betting on the sure bets. I’m afraid of that trend… we’ll take fewer chances. And that’s because of the economy and because of the budget and the impact of what’s been going on from, you know, pandemic and now, you know, the strikes.”
Garret Melton also says she’s cautiously optimistic about increased opportunities for telling “stories we’ve never heard before. Look at the phenomenon of the ‘Barbie’ movie … it was a story told by a woman, about women, about feminism. And the takeaway from that for me — and this actually does get into sort of the research and the data of it all — the takeaway for that, for me, is not that the world is dying for more movies about toys. It’s more movies from [women], about women and women’s issues.”
And again, in the same vein as Karzen, she’s concerned about “our over dependence on franchise” and worried “about the commoditization of entertainment. And that we are seeing a lack of willingness to take risks on truly original” content. After all, “What if nobody [had] been willing to take a chance on ‘The Bachelor’ when it was first pitched? That was revolutionary.”
Huszarik also worries about “potentially truncating opportunities and shutting off voices.”
Additionally, she follows the data and predicts that short form content is the next big play. For kids and families, the “number one platform is YouTube, the number one content is short form. Studios have still not unlocked that.”
September 8, 2023
Is AI Opening Doors for Creatives or Devaluing Art (or Both?)
TL;DR
AI isn’t necessarily your friend or your foe. It’s the basis for a new set of tools that creatives can use, if they choose to explore its potential.
Photoshop and other editing programs have utilized versions of AI tools (e.g. Spot Healing Brush) for much longer than the general public has been using the term “AI.”
One way of looking at AI is to consider the idea that all art, all creativity is in a sense generative. Large language models are in many ways modeled after the way humans view the world and create things in it.
We may be entering the age of curation. AI will remove barriers to creation, but that doesn’t mean it will generate content with vision or good taste, which may be what differentiates creatives in this new art economy.
Is your general attitude toward technology predictive of your sentiment toward AI? Maybe.
First, it’s important to establish that no one in this conversation is a technophobe or an AI-disastrist. In fact, Naik says that he came to photography via his love of technology and gear, and that same enthusiasm has helped him to embrace AI tools.
“When AI started to slowly roll about, it brought back this full circle moment, where I realized that I need to go back and start embracing technology and staying on top of that to ensure that I can bring my vision to life,” Naik says.
In the world of Midjourney and DALL-E and more, Naik says, “Now there’s less and less of a barrier in order for us to actually show what’s in our mind. And that’s opening up a whole other spectrum of people and bringing them into this visual and digital art side.”
Doc Rock agrees, noting that AI tools are in a long line of technology that have helped people overcome gatekeepers who’d prefer certain arenas to remain rarified and inaccessible. AI, Doc Rock says, now “makes it so that anyone who has an interest in the process can” participate, and he thinks that increased accessibility to storytelling tools will be good for humanity.
However, Naik doesn’t only see blue skies ahead. He understands that disruption will lead to job losses for some: “I’m excited to see where that leads us but also, understand, very scared as well for creatives, especially when it comes to jobs, when it comes to, you know, being able to seek out the people who actually have a talent of doing everything manually and seeing how good they are at understanding the tools that we’ve grown up with, versus somebody who’s just come into the field and typed a few prompts and got something really beautiful.”
Doc Rock answers fears about AI taking away opportunities by focusing on the continued human involvement: “This is what a lot of people are misunderstanding about the AI. The AI is not this nebulous cloud of nothing. It has been backfilled with actual knowledge from actual people who actually wrote all this stuff.”
Jirsa predicts creatives will take on a new role: “I think humans, over the next 10 years, our job is going to be curation. Our job is going to be not so much the technical skill, but curating the experience. But at a certain point, I do believe it comes down to just plain authenticity.”
Of course, that doesn’t mean those who ignore technological advances are in the clear. Doc Rock says, “I think if you’re worried about your job, I will give you my favorite line ever. AI will not take your job, but somebody using AI absolutely will.”
To mitigate that, Doc Rock says, creatives should try out the tools and get firsthand experience before declaring themselves in the anti-AI camp. It’s important, at a minimum, to have a general understanding of what you’re afraid of and why, he thinks.
Naik encourages us to keep “your mind open to the possibility [of AI being helpful], so that if in the future, you need to use it, you have it.”
AI FOR GOOD
Jirsa is excited about using AI tools to automate or reduce dull tasks. He points to “workflow possibilities” and eliminating “the things that we don’t necessarily want to be doing,” citing color grading as one tedious example.
“One of the things that like Midjourney has enabled us to do is actually cut down on production time,” he explains. “So for instance, instead of actually taking the model and going into the woods, we can actually shoot it in studio, use Midjourney to create very elaborate backdrops and composite it, saving a lot more production time than actually going into the space, trying to find the exact, perfect location.”
And in his own professional life, Naik says AI has rescued photographs he’s taken and cropped for social media, bringing new possibilities to life via upscaling.
DIGGING INTO THE DEBATES
But how is AI likely to affect the value of our art, or our understanding of what art is? Will we differentiate between purely human-created and AI-generated art? Maybe.
Naik cites “AI Drake” as the turning point that moved him away from believing humans will continue to value the effort of making art. He thinks the end product is really what will continue to matter, not the process of creation.
“There is no fighting against [AI-generated art] because the result is so freakin’ good. And it’s only going to get better,” Jirsa says.
Much of this debate, Naik realized, amounts to inside baseball. “Most people probably don’t care. In the long run in our little bubble, we care because we are artists,” he predicts. “But for a lot of people out there, the general public, as long as [the public] gets that final goal in a shorter amount of time, that is ultimately what matters.”
But Naik doesn’t think that’s necessarily a bad thing for creatives and for art. “In this overarching conversation, the vision, and curation, is what is going to stand out amongst everything. So we need to really harness that [AI is] bad news for people who have bad vision and horrible tastes.”
Van Johnson notes that these conversations aren’t really new. The specifics have changed, but the concerns haven’t. From the dawn of photography, every advance has had detractors who say that new tools cheapen the art, whether going from black-and-white to color, adding in Photoshop augmentations, even shooting RAW vs JPEG files.
Besides, Naik argues, “AI has always been in our tool set. Why are people complaining now? For example, we’ve always had the neural filters in Photoshop. People never really said much. Then we’ve had the Spot Healing Brush, which is AI driven… but doesn’t make it any less ours.”
We’re “rearranging chairs for no reason,” Van Johnson says, “because the bigger picture is the content.”
But what about the role of ethics here? Is it really OK to use AI to emulate a certain style?
“A human can learn how to paint just like Picasso; you could spend a lifetime figuring it out. And now you can do it, the machine can do it much quicker, does it make the machine wrong?” Van Johnson asks. “Because it can do it faster? I don’t know.”
Jirsa is in the camp that believes very little, if any, artwork is truly original. Even before the advent of AI, he says, “every piece of artwork was already generative. It was already an amalgam of like, everything that you had seen.”
As a photographer, Jirsa says, “I’m just synthesizing everything that I’m seeing and putting my own spin on it.”
Naik agrees: “Subconsciously, we’re just a large language model that doesn’t create the source,” meaning the things and art that have influenced us as artists and creatives.
Thrown into sharp relief by the Hollywood strikes, the debate over human creativity versus generative AI has become a battleground.
September 6, 2023
Domenick Satterberg: Shooting Sports as Cinema
Domenick Satterberg’s cinematography from a recent XFL game
TL;DR
Sports cinematographer Domenick Satterberg dives into the philosophy of shooting NFL games with cinema-style cameras, including lens selection.
Whether you are a budding cinematographer or a sports enthusiast, this video will provide valuable insights and a behind-the-scenes look into the world of sports cinematography.
The Netflix series “Quarterback,” featuring Patrick Mahomes, and the video game Madden NFL are discussed in relation to using cinematic techniques.
It seems remarkable, looking back, that NFL games were once recorded on 16mm film. Even more remarkable is that every game since 1962 has been filmed by a specialist cinematography crew from NFL Films, the production unit of the league. The techniques they pioneered have recently been co-opted into the live broadcast and have gone mainstream with the rise of the behind-the-scenes sports documentary.
In this video presentation (viewable below via NAB Amplify’s Video Learning Lab), veteran sports cinematographer Domenick Satterberg charts the history of NFL Films and discusses the camera techniques and technologies he’s used.
Until 2013, NFL Films was still shooting 16mm footage of every NFL game for production of cinema-style game highlights that still form a key part of the league’s marketing.
One of NFL Films’ founders, Ernie Ernst, explains in an archive clip: “When we started NFL Films, there was something that I thought was missing in all sports cinematography.
“I wanted to get the storytelling shots of the way that the sun came through the stadium, the cleat marks in the mud, the bloody hands of a player. We had other cameramen who are great action photographers. But to me, I wanted to get those little details that, added to the action, would flesh out the story.”
image courtesy of Domenick Satterberg
In 2012-13, the NFL Films crew — usually just a two-camera operation per game — “were the oddballs with light meters on the sidelines shooting film. I want to say the NFL was spending $50,000 a week just on film.” That included processing the footage in a lab before digitizing for distribution.
Since that season, digital cine cameras have been used, and the workhorse then and now, for Satterberg at least, is the ARRI Amira.
“The Amira is built for documentary shooting,” explains Satterberg. “It’s shoulder mounted and has the same sensor as an ARRI Alexa. We’re still shooting the Amira because everything we shoot is still 1080p. No need for 4K. We shoot 8 terabytes every Sunday, with footage transferred via fiber from every NFL stadium back to New Jersey for postproduction.”
Teams shoot at multiple frame rates across the game, including 24, 30, 48 and 60 frames per second, up to 120. The unit has eight in-house staff cinematographers and around 60 freelance shooters across the country.
Getting the Shots
Of course, it is extremely hard to follow football, a point that Satterberg repeatedly makes. Only with experience and experimentation can you really get the shots you need, bearing in mind that there are just a couple of cinematographers working the game. Shot selection is essential, as are the lenses required to capture those cinematic close-ups and slow motion shots from the touch line.
“The Amira truly is the best sports camera because of its eyepiece. I give credit to anybody who could pull focus on a football, or any kind of flying object, on a monitor or an LCD screen.”
image courtesy of Domenick Satterberg
According to Satterberg, the best lens pairing with the Amira for shooting a Super Bowl is the Fujinon 25-300mm cinema lens. He also uses an adapter that expands the image to Super 35. “It’s a great adapter if not ideal, but it’s what we use at every NFL game.”
The long zoom range allows him to shoot medium-wide to long shots without changing lenses. It is, nonetheless, a heavy setup, which Satterberg operates with no focus puller. However, he has customized a focusing rig that helps him achieve perfect focus.
“The pure size and weight of this lens has its drawbacks, but I can easily overlook those flaws because of the sharpness and quality this lens produces.”
Nevertheless, it’s very difficult shooting ENG-style with it, so this setup is mainly utilized for static shooting; you have to know what you’re doing.
“With the ENG zoom, [I just use] minimal taps on the zoom and the focus to get you where you need to be, so you can see that ball flying through the air. It’s so minimal. It’s muscle memory at this point. I’ve got a pistol grip underneath the zoom rocker. So, I’ve really dialed in that Amira, now that I own it, to just fit perfectly on my shoulder. It’s all about balance.”
Satterberg won’t be changing up to shooting full frame anytime soon. “I know the Alexa LF is a great camera for the motion picture industry, but we need lenses that get out very, very far. And the lenses that fill those full frame sensors need to be extremely large,” making them too heavy and unwieldy for shoulder-mounted work over 90-plus game minutes.
Capturing the Details (and the Emotion)
We also hear from Hannah Epstein, who works with NFL Films shooting a variety of work on shows, games, events, and specials. Her style is to capture the game with a lot of attention to non-game highlights.
“It’s less specific plays or moments,” Epstein says. “I like to focus on really tight elements and just get facial expressions or hands or sweat; the emotion after the play, or before the play, eyeballs looking over the line of scrimmage. I like to play with negative space and use the crowd in my shots. I just love capturing the details of the game that puts you inside the game in a different way than anyone’s able to see on regular broadcast or from the stands.”
Some of Satterberg and Epstein’s work may feature in the forthcoming eight-episode Netflix docuseries “Quarterback.” It follows three of the biggest quarterbacks in the game throughout the 2022 season, giving an unprecedented look at what it takes for the Kansas City Chiefs’ Patrick Mahomes, the Minnesota Vikings’ Kirk Cousins and the Atlanta Falcons’ Marcus Mariota to succeed when all eyes are on them.
Satterberg says they shot with Amiras in HD and speculates that Netflix has upscaled the footage to 4K for its platform.
Video game developer EA Sports also hired an NFL Films DP to teach them about cinematography when making Madden NFL, the hit American football video game series.
Satterberg explains they wanted to learn about shallow depth of field and how to control camera shots, racking between in focus and out of focus.
“They literally handpicked an NFL Films cinematographer and put them on staff and said, ‘We want it to look pretty.’ And so that then evolved to Fox, CBS and NBC going with their full frame shallow depth of field cameras on the field of play as part of the live broadcast. They do use a lot of autofocus, because… it’s there, use it. They’re handing it to guys who are traditional shoulder-mounted shooters.
“For the first couple years, it was kind of clunky. But I think they’re really getting the hang of it now, and they’re replacing full Steadicam rigs with gimbals and full frame mirrorless cameras. It’s pretty amazing.”
Getting ready to plan your journey at NAB Show New York? You won’t want to miss this opportunity to explore the synergies between live broadcast and cinema with the Cine+Live Lab!
This destination features a variety of educational sessions and production demonstrations centered on the trend of translating cinematic techniques to live broadcast productions. All sessions and demos are open to all-access badge holders, but off-site bonus programs may require prior registration.
Live concert director Paul Dugdale explains his approach to capturing live performances, including how he captures a performer’s energy.
September 5, 2023
AI: How Media Pros Are Moving from Panic to Practicality
TL;DR
We need to take a longer-term view of AI as a tool to speed production but not one that replaces human talent, says a panel of experts.
The industry is hesitant about using AI because of ethical and copyright concerns, but perhaps blockchain technology could help track and verify content.
In just a decade we could have AI-driven real-time personalized storytelling.
Pragmatism has replaced fear as the sentiment most likely to be directed at generative AI by creative media companies and tech developers.
“If you’re not using AI, it’s sort of like saying you’re not going to have a mobile strategy,” says Jeremy Toeman, founder of AugX Labs. “AI was commoditized in less than a year, and now it’s just as much a building block for doing things as anything else.”
Like Toeman, Steve Vonder Haar, senior analyst for Intelligent Video & Enterprise at IntelliVid Research, is taking a long-term view:
“In a decade we will not be using the term artificial intelligence at all because it’s not really descriptive of the value that these capabilities are delivering to the marketplace, just as was the case in the early 1990s when ‘information superhighway’ was in fashion. As we move forward, the term AI is going to fade away.”
All these experts agree that AI helps by speeding up processes, but that if users really want to make creative content with value, then AI only gets you part of the way. It may automate 80% of the previously manual process, but human skill, knowledge and tastemaking are crucial for finesse and polish.
As Mobeon CEO Mark Alamares puts it, AI enables teams and individuals to amplify their capabilities to make a much more efficient production process overall. “AI will enhance what we’re doing both on a creative and technical level, in the broad sense,” he says.
Vonder Haar likens generative AI to “getting people past that blank sheet of paper.”
Toeman says his colleagues use ChatGPT to evolve their own strategies and question their own thinking, resulting in higher-caliber outcomes than they would otherwise achieve.
The downsides of reliance on generative AI include stifling independent creative thinking and learning.
“I still remember when I used to know phone numbers. And then, thanks to Google, we don’t know facts. And now, thanks to GPT, we might not know anything,” says Toeman. “That’s obviously hyper cynical, but there’s somewhat of a concern.”
Alamares reports a lot of hesitancy in the creative industries, because of ethical concerns, but most of the panelists are pessimistic that anything can be done to concretely vet every copyright infringement or deepfake.
“For me, it’s a garbage in-garbage out,” says Vonder Haar. “If you’re going to take information from the web, then God bless you, because you’re not going to have a trusted source of information from which to draw.
“The real future for AI in the business sense is going to be in the development of limited datasets that are used to inform decision making within a specific corporate network or a specific realm of individuals.”
There is some discussion about whether standards bodies like MPEG could devise a scheme to embed watermarks directly into video codecs as a way of tracking and verifying content. Vonder Haar suggests that Vizrt’s NDI video-over-IP protocol could be used to the same end.
“NDI is relatively widely distributed among devices [and they] would also have the opportunity to create some sort of standard that would help in this type of watermarking of real content.”
Perhaps blockchain technology holds the most promise for differentiating real from fake video.
“If you’ve got blockchain-certified video that was recorded from a blockchain-certified camera you can know that that’s the original,” says Toeman. “I think that’s going to be hopefully our savior through this all, I don’t know.”
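The underlying idea is simple to sketch in code. In the minimal Python illustration below, a plain set stands in for the ledger and the file names are hypothetical; a real system would sign the hash with the camera’s device key and publish it to a distributed chain:

```python
# Conceptual sketch only: fingerprint a video file at capture time,
# then check any copy against a trusted registry later.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# A "blockchain-certified camera" would publish this hash to a ledger;
# here a plain set stands in for the chain.
ledger = {fingerprint("original.mp4")}

# Anyone can later verify whether a received copy matches the original.
print(fingerprint("received.mp4") in ledger)
```

One caveat worth noting: any re-encode changes a bit-exact hash, which is why the panel’s talk of watermarking at the codec level matters the moment a platform transcodes the file.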
On the storytelling side, AI will be able to jumpstart creative thinking and fast-forward the scripting process, but there’s a belief that traditional storytelling talent will rise to the top.
“The point is ChatGPT drives toward the norm, toward the middle, toward the average,” says Toeman. “So you use it to do average things. If you want to be a great storyteller, you will write a better prompt to get more out of it, but all the tools are kind of the same from this perspective.”
The consequence, he says, is that reliance on generative AI tools alone will only churn out very average, mediocre content. The flip side is that this trend will enable master storytellers to use the tool to shine.
“A great example might be Jon Favreau, who made the first two seasons of ‘The Mandalorian’ using a lot of new tech to make that storytelling cheaper, faster, easier. But it’s not the tech that made the story.
“So, if you can write a very clever prompt for your unique story angle, and then add your own special sauce on top of that, that’s where [AI is going].”
Corey Behnke, co-founder and producer at LiveX, says he believes there will be more demand for producer oversight and moderation of AI than ever before when it comes to live streaming.
AI could help with 80% of the mundane tasks in video production, but that still leaves 20% for actual humans to get the product right.
“The place where you are going to deliver your value is that last 20% of broadcast quality video; that’s [what] separates a basic piece of video from a high polished piece of content,” agrees Vonder Haar.
“Because so much more content will be developed, there will be more opportunities for high-end producers. The only folk who are going to be in trouble will be those who [operate] at the very basic levels of video production rather than the high end of the market space.”
Asked where he thinks AI will be a year from now, Toeman anticipates we will start seeing the first AI-generated content but no one will like it: “That’s my hunch. I think the content industry stays away from AI for [maybe] five years, and only then will we start seeing it used sort of the way CGI showed up in movies.”
In a decade we could have AI-driven real-time bespoke storytelling: “Make me a video in the style of Harry Potter about a sci-fi wizard on an asteroid, and I want it to be 90 minutes long, and make it seem like it’s written by Quentin Tarantino. I think that’s 10 years from now, tops.”
If you’d like a preview of our AI Creative Summit, workflow specialist and conference program manager Gary Adcock is your guy.
August 29, 2023
Where Fantasy Meets Reality: Producing the Mount Doom Sequence for “The Rings of Power”
From “The Lord of the Rings: The Rings of Power,” Cr: Amazon Studios
TL;DR
“Rings of Power” director Charlotte Brändström and producer Ron Ames wanted the show to feel grounded in reality for both the actors and the viewers. They tried to capture as much of the series in camera as possible, rather than rely on special effects.
The implosion of Mount Doom is a pivotal scene in the series, and the event and its aftermath required massive interdepartmental cooperation to pull off. The creative team also wanted it to be based on real-world physics, despite the Middle Earth setting.
Season one of “The Rings of Power” is available on Amazon Prime. Season two filming began in October 2022.
Amazon Studios’ “The Lord of the Rings: The Rings of Power” is a fantasy series based on J.R.R. Tolkien’s “The Silmarillion” and other Lord of the Rings mythology texts.
For Charlotte Brändström, who directed episodes six and seven, that grounding meant they couldn’t rely on sci-fi tropes to accomplish any natural phenomena in Middle Earth. This world might have magic, but physics applies to it, as well.
This was especially important because these episodes feature the explosion of Mount Doom and its devastating aftermath in the Southlands.
“[I]t was all based on reality. Now, we believe that the depiction of magic, or supernatural events are descriptions of science that we don’t fully understand. It’s bent physics,” producer Ron Ames told Variety’s Making a Scene. “So we wanted to make sure that the audience is swept with the reality of it. And that it is powerful as can be.”
Additionally, director Charlotte Brändström knew that both episodes — which she said were more like a feature film than standard TV — needed to be anchored by Galadriel’s viewpoint. “Whenever I get a script or a scene, I look at perspective,” Brändström told Variety. “That’s really important because it gives a point of view for the audience to identify with; you had to become Galadriel’s point of view because she’s sort of the thread throughout.”
Watch “Rings of Power” creators break down the Mount Doom machine for Variety’s Making a Scene series.
So how, exactly, did they pull it off? For starters, some good, old-fashioned (television) magic.
Ames admits it was daunting. “This is like making [special effects] for feature films. It was huge. This was one of the key moments that we knew we were going to be building; the first challenge was creating an event that is grounded in nature.”
PLANNING AND LOTS OF RESEARCH
First and foremost, they had to create Mount Doom, pre-eruption.
“How are we going to actually, physically, make this believable and not look like a cartoon?” Ames says he wondered. This was additionally complicated because “we knew that people have a vision of what” this mountain looks like. To stay true to that, he says, they “wanted to actually take what [other creators] had done, and then make a very clear interpretation in our world.”
That meant, Ames says, “we went immediately to research, and we started looking at descriptions of Pompeii, Mount St. Helens, so we were very respectful of Tolkien’s descriptions, his drawings, the sharp peak in nature of the mountain itself. We were also inspired by Peter Jackson, his work and what had come before.”
Once the mountain design was hammered out, they had to determine the tone and the context for the eruption.
“It was important for us to create this environment that felt sunny, felt warm, colorful, to then make the horror of what is happening,” cinematographer Alex Disenhof told Variety.
After all, Brändström says, “Film [is] always about contrast. You want to destroy a beautiful world; you want to damage something that’s beautiful. So you wanted everything to be lush and beautiful. They were cheering and drinking and laughing and enjoying themselves, when suddenly everything around them started falling apart.”
But, again, the team wanted the audience to understand and anticipate what was happening before Galadriel and the other characters would have any idea what was coming.
“We needed to understand that there was this key that we had been teasing throughout season one, a broken sword. And by putting it in this stone and turning it, it would essentially unlock the dams that would then allow the water to flow underground into these tunnels that… we’ve seen the orcs digging these tunnels. We haven’t known what they were for, you know, allowing the water to flow through under the village, [eventually] hitting the magma and lava in the mountain and causing the explosion,” Ames says, emphasizing that the whole sequence was based on a practical machine design as well as the science that would actually set off such a reaction.
From “The Lord of the Rings: The Rings of Power,” Cr: Amazon Studios
The waterworks, they decided, would be “basically a dam that had been built in and disguised into the side of the mountain” to look like a small waterfall. Ames says, “that actually is photography of a real body of water.” But the final scene itself features work done among four different vendors: Wētā FX, ILM, Rising Sun and Method — “and even for a couple of shots of de-neg, all the vendors worked together to create a unified vision of that event. … Everybody was, like, all shoulders to the wheel to make this thing happen.”
And the destruction of the village? They didn’t rely on special effects to make that happen.
“We did it for real,” explains Disenhof. “It would have been a shame to just focus on little close ups of the ground as it was exploding. Because these things shot up in the air for real 100 feet in the air.” He also noted that the wide establishing shots helped place the geography.
For Brändström, the advantage of the practical effects was “to get better reactions from the actors and the characters and the extras.” In fact, she says, “The first time we did the water explosion, I don’t think anybody expected it to be so big, to come up so much.”
The real-life waterworks, Disenhof explains, “allowed the visual effects team to take that and augment it to an even greater level of destruction. It’s a spectacular thing to have in camera, and you know, to have the extras running around and getting soaked and our actors getting soaked. And that’s something you can’t replicate if it was pure visual effects.”
But, as viewers know, the geysers weren’t the end of the chaos.
“This huge wall of ash was approaching, or blocking out the sun, that made it feel different from the sunny, bright day,” Disenhof says. “It was really about scheduling the shot later in the day when she was in shade. And then, in the color correction, we could push that even further.”
In order to create the aftermath, they would need an indoor space to shoot.
“We really needed to be inside because it’s just impossible to control smoke like that in an exterior location. And on a scale of one to 10 the amount of smoke in the air, this was, you know, 11,” Disenhof says.
The crew repurposed an old indoor arena at a horse barn. Disenhof remembers, “I created a massive softbox all the way around the set using SkyPanels, you know, a very, very, very soft ceiling. We then also surrounded the set with white muslin instead of green screen or blue screen.”
He says it took weeks and “probably three or four rounds of testing with the camera and with ash in the lights to get the shade of red that I wanted.” He was, unfortunately, inspired by a wildfire he had experienced in Oregon, which he says, “turned the whole sky this color that you really see in the show.”
But how much of this was finished in post? Not much: “85 to 90% of what you see there [on screen] is in camera. It’s an amazing feat from so many different departments,” Disenhof says.
“I learned a lot about visual effects, special effects, on the whole collaboration, putting small pieces together,” says Brändström.
SOUND DESIGN
There’s been a lot of talk about the visuals of “The Rings of Power,” but the sound design team also more than earned their kudos in episodes six and seven.
“If it wasn’t for such deep, thoughtful production design, the sound can’t really reach those levels,” supervising sound editor Robbie Stambler told Variety.
For example, “Within the dam, there’s all sorts of ancient engineering, and you never really see it. There’s one shot where a stone is sort of retracting, when he puts the sword, and you can see … him acting in the moment, listening to the mountains around him.”
Ultimately, Stambler says of this scene, “Telling that sonic story off screen, without, you know, lines to color in. It’s just pure fun.”
Maybe equally fun, but certainly more dangerous, was crafting the right noises to accompany the explosion and the lava.
“When the lava is shooting out of the volcano, and it’s landing amongst our heroes, almost like bombs going off,” Stambler says. “We needed to create sort of an explosion impact sound, but it couldn’t just be your average, every-day TNT or even Earth gravel sort of displacement explosion. It needed to have a different element to it. So what I did was, I took the sound of recordings of a wet sponge being dropped into a deep fryer, which is an incredibly dangerous thing to do.” (He’s happy with the end result, but advises the folks at home should not try it.)
After the initial volcanic explosion, a cloud of ash and debris heads down to the village. Within the cloud, Stambler notes, there are small thunderstorms brewing, and his team also leaned into a “high frequency shrieking, shrill wind sound and feeling [that gets] closer to you and closer to you.”
But as the cloud approaches Galadriel, “if you listen to the mix in that moment, we get rid of, and fade out, all of this sort of crowd, ambient terror reacting to what’s going on around over… And even the sort of debris, that stuff kind of goes away. And you’re just left with this all enveloping screaming, shrieking wind that’s traveling at you. And then you hear her breathing and sort of preparing for what’s about to hit her.”
Stambler says that Galadriel’s breathing is prominent in the mix as “an incredible emotional through line, and it’s so potent amongst the really epic music that [score composer] Bear McCreary wrote for that sequence.”
At the beginning of episode seven, when Galadriel opens her eyes, the sound is completely different, as is the world covered in ash.
To achieve that sense, “I remember asking Robbie [Stambler], then in LA, to really make everything very subdued, as if it’s a world covered in cotton,” recalls Brändström.
Stambler notes, “Silence is the loudest thing you can do in a film.” He says that Brändström used “The Hurt Locker” as a reference for how this scene should feel and sound.
To achieve that vibe, Stambler says, audio engineer “Beau Borders and I created a really fun tool, which would utilize the Atmos ceiling speakers, where we took the sound of a multi-tap delay effect, and we would pan it into the ceiling, and then it would kind of like animate in the pan. So it would kind of echo out over you in this sort of dreamlike state… We had used that technique a handful of times, and it kind of became a sonic motif for you know, dreamy, weird surreal, abstract moments.”
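For readers curious what a multi-tap delay actually does to a signal, here is a minimal sketch in Python. The tap times and gains are illustrative, and the Atmos ceiling panning Stambler describes happens downstream in the mix; this only shows the echo-generation step.

```python
import numpy as np

def multi_tap_delay(signal, sample_rate, taps):
    """Mix a dry signal with several delayed, attenuated copies of itself.

    taps: list of (delay_seconds, gain) pairs -- values here are illustrative.
    """
    max_delay = max(delay for delay, _ in taps)
    out = np.zeros(len(signal) + int(max_delay * sample_rate))
    out[:len(signal)] += signal  # the dry signal
    for delay, gain in taps:
        start = int(delay * sample_rate)
        out[start:start + len(signal)] += gain * signal  # one echo per tap
    return out

# Three echoes trailing off, in the spirit of the "dreamlike" sonic motif.
sr = 48_000
dry = np.random.randn(sr)  # stand-in for one second of recorded sound
wet = multi_tap_delay(dry, sr, taps=[(0.25, 0.6), (0.5, 0.35), (0.75, 0.2)])
```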
And then there’s the fire. Where there’s on-screen fire, there’s the sound of a flame crackling.
“We create all these layers of flame, so that in the mix, Beau [Borders] could pan it specifically to where you’re hearing it on the screen. And then of course, there’s dynamic fire, of, like a torch, moving across the screen, which we would then animate that pan,” Stambler says.
Season one of “The Rings of Power” is available now on Amazon Prime. Season two’s release date has not yet been announced, but filming began in October 2022.
How Will This All Work? The Impact of Generative AI on Media Technologies
TL;DR
Industry experts discuss the transformative role of generative AI on media technologies at Streaming Media Connect 2023 in a panel discussion moderated by Streaming Media contributing editor Nadine Krefetz and featuring George Bokuchava of Tulix, Globo’s Jonas Ribeiro, independent AdTech consultant C.J. Leonard, and Darcy Lorincz of the Barrett-Jackson Auction Company.
The panelists stressed the irreplaceable role of human expertise, advocating for a multi-disciplinary approach in AdTech development. They also called for a balanced approach to training large language models on open versus enterprise data.
Automated digital product placement emerges as a new frontier, with Globo using AI to integrate products seamlessly into scenes.
Generative AI could also power what Bokuchava calls “dynamic ad generation,” creating real-time, hyper-relevant ads based on current market conditions, global events and other data points.
Imagine a world where AI not only assists in content creation but also plays a pivotal role in monetization strategies for streaming media. That future isn’t far off, as shown by a group of industry experts gathered for a panel discussion, “How Generative AI Will Impact Media Technologies,” at Streaming Media Connect 2023.
The conversation ran the gamut from the use of public versus enterprise data to the transformative potential of generative AI in the advertising technology space.
Moderated by Streaming Media contributing editor Nadine Krefetz, the panel featured diverse viewpoints and approaches to harnessing the burgeoning technology. George Bokuchava, CEO of digital distribution platform Tulix, delved into the complexities of using AI to implement encryption and digital rights management (DRM) systems. Jonas Ribeiro, digital products, platform and AdTech manager at Globo, discussed the cautious approach needed when utilizing open data. Advertising technology and operations veteran C.J. Leonard, owner of MAD Leo Consulting, highlighted the time and cost-saving aspects of generative AI in advertising and content creation. Darcy Lorincz, chief technology officer at Barrett-Jackson Auction Company, offered his insights into how the renowned auto auction house is employing generative AI to create high-quality videos and other data-rich assets for promoting their vast inventory of collector cars.
While the panelists were unanimous in acknowledging the transformative power of generative AI, they also stressed the irreplaceable role of human expertise. As Bokuchava put it, “Without programmers, we cannot implement it; it’s just not enough.”
This sentiment was echoed across the panel, highlighting the necessity for a multi-disciplinary approach. In a landscape where streaming platforms and consumer behavior are in constant flux, the panel agreed, the collaboration between developers, data scientists and AI experts is crucial for building robust, scalable and secure AdTech platforms.
Open Data Vs. Enterprise Data
In an industry that thrives on data, the panelists were quick to address the role of using public data versus enterprise data to train the large language models that power generative AI.
Ribeiro, with his extensive experience in data analytics for media companies, emphasized the importance of a balanced approach. “We are using both,” he said. Open data provides better results, generally speaking, but also demands more caution, he explained, because LLMs can be influenced by virtually anything on the internet. “So for this, we have a lot of people to check the outputs.”
Cost is another factor. “Not everyone can afford to have private data,” Ribeiro said, but for certain specific workloads it makes more sense. “We are trying to use the private data and work on it to get a more global perspective of what we are doing and what we are delivering to our customers and clients.”
Lorincz chimed in about the benefits of using proprietary data. “The majority of the 50 years of automotive information we have is ours, we own it,” he said. “Having that data is part of our competitive advantage.”
Training LLMs on proprietary data is a must for public-facing applications such as customer service, Lorincz insisted. “You have to train your own model if you really want the results you’re looking for,” he said. “When you train your own model… then you’re all-in on only your stuff. It’s only going to be talked about in your tone.”
Use Cases for the AdTech Space
Generative AI was able to significantly boost the number of auto listings and auctions at Barrett-Jackson, Lorincz said. “We have to write tens of thousands of car descriptions every month for our listing service or auction, wherever those vehicles may appear, and that was a lot of heavy lifting. A lot of editorial, a lot of knowledge you had to have, or just a lot of research,” he explained.
“We put the research tools, the information on every car sold for 50 years, everything, into our own language model. And now we can generate that editorial in seconds. It still needs people, because you still need to do some moderation, but as the machines learn more and more it’s less effort for us, so we can scale a business. Ultimately, we can do a million listings now and I don’t think that would ever [have] been possible for any number of people before.”
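Lorincz doesn’t describe the implementation, but the workflow he outlines (structured vehicle records in, draft editorial out, a human moderation pass after) might look roughly like the sketch below. The `llm_generate` function and the record fields are hypothetical stand-ins, not Barrett-Jackson’s actual pipeline.

```python
def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a privately trained language model."""
    raise NotImplementedError("wire this to your own model endpoint")

def draft_listing(car: dict) -> str:
    # Ground the prompt in the company's own structured, historical data.
    prompt = (
        "Write an auction listing description in our house editorial tone.\n"
        f"Year: {car['year']}\nMake: {car['make']}\nModel: {car['model']}\n"
        f"Sale history and notes: {car['notes']}\n"
    )
    return llm_generate(prompt)

def publish_listing(car: dict, moderate) -> str:
    draft = draft_listing(car)
    # "It still needs people": a human reviews every draft before it goes live.
    return moderate(draft)
```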
Education and training is one area where generative AI will definitely have an impact, Leonard predicted, pointing to traditionally time-intensive tasks that could be streamlined, such as employee onboarding and organizational documentation. “Advertising is a high turnover space. If you’re in a job more than two years you’re the oldest tenured person there,” she noted, describing the typical six-month learning curve a new hire requires before they start delivering ROI. “Gen-AI will help in the future with education and ‘How do you get that person up to speed quickly,’ and maybe take out some of the roadblocks that we’ve had previously,” she added.
Automated digital product placement is another frontier that’s ripe for transformation in the AdTech sector, according to Ribeiro. He detailed how Globo is using AI to identify opportunities for seamlessly integrating products into various scenes. “So we identify some opportunities [for] putting a bottle up on the table, that maybe can be water or a beer or soda, and have more type of formats for the publishers so they can impact a lot of people more in a directed way,” he noted. While this technology is still in the research phase, its potential to revolutionize the way advertisers engage with audiences is significant, offering a more dynamic and personalized experience.
Among the panelists, Bokuchava arguably had one of the most groundbreaking ideas for leveraging generative AI in the AdTech space. While the industry is already familiar with the concept of “dynamic ad insertion” — the real-time placement of pre-made ads into streaming content — Bokuchava introduced a more advanced notion: “dynamic ad generation.”
“Imagine you have a company profile and allow for AI to generate the ad dynamically based on market conditions, based on the latest info, based on whatever is going on in the world,” he proposed. This concept takes dynamic ad insertion to the next level by not just placing the ad, but actually creating it in real-time based on various data points.
The implications are profound. Instead of merely inserting pre-made ads into content streams, advertisers could use AI to generate ads that are hyper-relevant to the current moment, making them more effective and engaging. While this idea is still in the conceptual stage, its potential to revolutionize the AdTech industry is immense, offering a more dynamic, targeted, and timely advertising experience.
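To make the distinction concrete, here is a minimal sketch of the difference: dynamic ad insertion selects a finished creative, while dynamic ad generation would assemble a prompt from live data and create the ad itself at request time. All names and fields below are hypothetical; Bokuchava described the concept, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class AdContext:
    brand_profile: str      # standing description of the advertiser
    market_conditions: str  # e.g., live pricing or inventory signals
    world_events: str       # e.g., a trending story relevant to the brand

def llm_generate(prompt: str) -> str:
    """Hypothetical call to a generative model; not a real API."""
    raise NotImplementedError

def generate_ad(ctx: AdContext) -> str:
    # Unlike dynamic ad *insertion*, nothing here is pre-made:
    # the creative itself is written at the moment of the ad request.
    prompt = (
        f"Brand: {ctx.brand_profile}\n"
        f"Market right now: {ctx.market_conditions}\n"
        f"In the news: {ctx.world_events}\n"
        "Write a 15-second ad script tailored to this exact moment."
    )
    return llm_generate(prompt)
```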
New “Ghostbusters” Short Tests the Limits of Real-Time Technologies
Cr: Sony Pictures Entertainment
TL;DR
Sony unveiled a two-minute “Ghostbusters” short film directed by Jason Reitman at its annual Creator’s Conference.
Enabled by real-time technology and virtual production techniques, the proof-of-concept project was a collaboration between SPE, Ghost Corps, Pixomondo, PlayStation Studios and Epic Games.
The production team utilized Unreal Engine to create a four-square-mile digital twin of New York City. This virtual environment allowed for virtual scouting, animatic sequences, and real-time adjustments, fostering a more interactive and collaborative process.
The integration of real-time technologies is paving the way for the next generation of filmmakers, enabling them to create movies in environments and with characters that were previously inaccessible.
A few weeks ago, Ghostbusters fans received a not entirely unanticipated but nonetheless disappointing setback when it was announced that the latest movie has been postponed to 2024 due to the ongoing Hollywood strikes. But Sony still had something to keep fans engaged until next year, unveiling a new Ghostbusters short at its annual Creator’s Conference.
The two-minute short film, which you can watch in the video at the top of the page beginning at the 07:31 mark, is an exciting real-time visual effects proof-of-concept project from Sony Pictures Entertainment in collaboration with Ghost Corps, Pixomondo, PlayStation Studios and Epic Games. Set in the heart of New York City, it features the iconic Ecto-1, the 1959 Cadillac ambulance conversion that ferried the Ghostbusters team and their equipment through the city, and the Stay-Puft Marshmallow Man in all his multi-storied glory.
Pushing the limits of virtual production, the project was helmed by Ghostbusters: Afterlife director Jason Reitman, who appeared alongside key creatives for a panel discussion on the making of the short at the Sony Creator’s Conference. Joining Reitman were Guy Wilday, VP of Interactive Technology at Sony Pictures Entertainment; Miles Perkins, director of Unreal Engine Business for Media & Entertainment at Epic Games; David Murrant, VP of Creative Arts at Sony Interactive Entertainment; and Mahmoud Rahnama, chief innovation officer at Pixomondo.
Collaboration and Innovation
The Ghostbusters real-time visual effects project is a groundbreaking collaboration that brought together industry giants and creative minds. Sony Pictures Entertainment spearheaded the project, providing the creative direction and technological support, while Reitman brought his unique vision and expertise.
The Ghost Corps team, originally formed by Sony to create a Ghostbusters cinematic universe and expand the Ghostbusters brand into films, television series and merchandise, helped safeguard the Ghostbusters legacy and brand integration. They were joined by Pixomondo, bringing its visual effects and real-time expertise, and PlayStation Studios, which provided motion capture services on their world-class mocap stages, all powered by Unreal Engine from Epic Games.
“As a director, there’s nothing more important to me than performance,” Reitman commented, reflecting on the innovative real-time process. “Pixomondo gave me the opportunity to direct a real-life actor and get a performance out of a digital character that I would have never been able to do if I was standing over an artist’s shoulder.”
More than just a short film, the project is an exploration of what’s possible with real-time technology. The production team utilized Unreal Engine to create a live, interactive digital twin of New York City. “And from that point on, everything went into the game engine,” said Wilday. “We virtually scouted all of the locations within the virtual environment. We built animatic sequences to start building out the content we needed within the engine for the final shoot. We did technical preparation on the mocap stage where we integrated render hardware and our VCAM system, and then finally brought all this together for the production shoot itself to capture the content for our short.”
The on-set experience allowed for immediate feedback and adjustments. This real-time approach fostered a more dynamic and spontaneous creative process, enabling the production team to experiment and innovate on the fly.
“It was really eye-opening for me, first of all, how fast Jason picked it up and started shooting,” Rahnama recounted, comparing the real-time approach with the traditional process for approving VFX shots, which could involve multiple QuickTime files and pages of copious notes, often crossing in the night. “That would take weeks, if not months,” he continued. But on this project, he said, “I could see that Jason was just coming up with shots, back-to-back.”
Happy Accidents
The new Ghostbusters short isn’t just a showcase of technological prowess, but also a testament to the creative freedom and unexpected discoveries that real-time technology can offer. The four-square-mile digital twin of New York City became a creative sandbox for the production team, affording nearly endless possibilities.
One “happy accident,” as Rahnama called it, occurred when Reitman inadvertently walked inside an apartment building with a window that provided a view to the exterior location. “Jason’s like, ‘This is such a cool shot, let’s shoot something like this.’ And this would never have happened with traditional visual effects,” he said. “It was really good to see how interactive and collaborative that was.”
Reitman was also struck by the incident. “We had never planned to go inside a building,” he said. “And I remember thinking, ‘Wait, I can go inside the building.’”
The unplanned moment was revelatory, said Reitman. “If you’re shooting a movie, you can’t just start opening people’s apartments and be like, ‘Hey, I’m just gonna do a shot from inside your window. That cool?’” he explains. “And so the idea that now we’re inside, we’re getting this perspective of Stay Puft walking by, and now I’m telling the actor, ‘Hey, take a look inside the window, and I’m gonna try to catch you like looking inside the window, and maybe cock your head a little, and then look down,’ doing all that stuff in real time, on the same day that we’re doing a car spinning around a corner, and like 10 other different beats. And that’s what made it great.”
Perkins compared the creative process to driving a car, emphasizing the importance of being able to seamlessly engage with technology to allow the poetic nature of storytelling to come to the forefront. He also highlighted the critical role of real-time technology in allowing moments like these to occur. He explained, “I think it’s absolutely critical, again, in visual effects. We’ve been building 2D images. But what happens when you enable teams to make the four square miles and that we can go around and start to look at things the way you would in the real world. And to me, that is just it’s so incredibly freeing, and also to be able to move the team and do a set, like somewhere else where you got to crane and you have all these other things. I think it’s really empowering.”
These happy accidents reveal the transformative potential of real-time technology. It’s not just about efficiency or cutting-edge visuals; it’s about fostering a more dynamic and spontaneous creative process. It’s about breaking away from the constraints of traditional filmmaking methods and embracing the unexpected moments that can lead to truly innovative storytelling.
Looking to the Future
The future of filmmaking is on the cusp of a revolutionary transformation, driven by the integration of real-time technologies and innovative approaches. The new Ghostbusters short is more than just a technological marvel; it’s a preview of what’s to come. “What thrills me about it is the possibility of independent filmmakers who want to tell their kind of stories, but in environments that they don’t have access to, with characters that they don’t have access to,” said Reitman.
The democratization of virtual production tools is paving the way for the next generation of filmmakers. “So for new people coming in, [the] next generation of filmmakers, they’ll hopefully embrace this technology and make their own movies that we wouldn’t have seen before,” Murrant said. “Who could the next Spielbergs, the next Jasons, be? Who’s that next generation coming through and doing great and interesting things we haven’t seen before?”
The project’s significance extends beyond the immediate impact on filmmaking. It symbolizes a shift towards a more collaborative and community-driven approach. “I think we are a community, bringing people together. That’s what filmmaking is, it’s a community of people coming together,” emphasized Perkins.
The convergence of film and games, the emergence of photorealism, and the ability to tell stories in previously inaccessible environments mark a new era in the Media & Entertainment industry. The Ghostbusters short serves as a testament to the boundless possibilities that lie ahead, fostering creativity, collaboration and innovation.
Get It To Go: The Gear You Need for Remote Production
TL;DR
Douglas Spotted Eagle shares his recommendations for creating your ideal remote production kit. (He’s not shy about sharing specific product recommendations!)
Learn why you need certain types of equipment and what you can (and can’t) skimp on in the budget. Discover what can play double duty in your bag and which pieces of equipment need to have a dedicated job.
Don’t be afraid to have fun and play around with new gear that may be smaller or replace multiple things in your kit, Spotted Eagle encourages viewers.
Douglas Spotted Eagle knows that having the right equipment for your production is only part of the battle.
In fact, “Owning the gear doesn’t mean much,” Spotted Eagle cautions. He shares an example of a video that was shot with recommended equipment, but he notes that the production team seems to have ignored the manuals that came with the gear.
Spotted Eagle shares his thoughts on what equipment is most useful for remotes and on-the-go shoots, as well as tips for how best to deploy it. (And yes, you should still review the manual after you watch his video.)
In terms of the basics, Spotted Eagle recommends you have:
A smartphone
A camera (though Spotted Eagle cautions that DSLRs don’t have high-quality DACs)
Tripod/support systems for lighting and for camera(s)
Lighting system (could be simple or more complex)
Microphone(s), with optional:
Mixer
Secondary recording device
Signal processing or pre-amp device
Software
Switching system
If that kit sounds extensive, keep in mind that “we don’t need big any more” for one- or two-person run-and-gun scenarios, Spotted Eagle says. “We’ve begun paring our equipment down because … our greatest amount of nonbillable time is set up.”
And for skeptics, Spotted Eagle notes that the film “Abigail Haunting” was shot by Kelly Schwarze primarily using a cell phone. He challenges viewers to say that’s evident from just watching the movie.
“And remember this is where the fun comes in, trying something new,” Spotted Eagle says, noting that trying out a different, smaller kit is one of the most fun parts of being in the industry.
“Play with something new,” he encourages.
Watch the full tutorial (above or in our NAB Amplify Video Learning Lab) to learn more about Spotted Eagle’s best practices for setting up an efficient, cost-effective remote production.
Why Creative Pros Need to Know (More) About AI
TL;DR
The AI Creative Summit will feature a wide variety of classes and training in many facets of artificial intelligence applicable to the media and entertainment industry.
In many ways, generative AI is the latest step in a gradual evolution of digital technology assistants that have made creatives’ lives easier.
Even AI skeptics should consider attending this event, says program manager Gary Adcock. He thinks a lot of the fear around AI in the industry is based around misunderstandings that could be dispelled through further education.
If you’d like a preview of the AI Creative Summit, workflow specialist Gary Adcock is your guy.
Not only does Adcock work in episodic and feature film production and post, he also serves as program manager for the new conference, which will be offered as part of NAB Show New York in October.
This debut event features a wide variety of classes and training in many facets of artificial intelligence, one of the most controversial (and inspiring) topics facing the media and entertainment industry.
Adcock, who has observed and been part of developments in AI and machine learning technology for many years, believes that many people who fear this technology is set to take over everybody’s jobs (on the way to world domination, natch) are reacting to the fact that certain applications of AI have only been in general release for a very short time.
While ChatGPT, Midjourney and similarly impressive tech have really only exploded this year, Adcock notes, the foundational technology, involving machine learning and enormous models, is not at all some out-of-the-blue phenomenon.
“We’ve gotten so much information on AI in such a short amount of time,” he says. “We’re confused and worried about it. But AI is a wide-ranging branch of computer science. It’s not really something new.” (It’s also not Skynet.)
To prove his point, Adcock lists off many popular apps and digital mainstays we’ve come to rely on that are undergirded by AI. For example: various types of digital assistants, autocorrect on smartphones and word processing software, sorting technology for packages and inventory, filters to alter our appearance on our phones or Zoom calls, Google Maps, content filters for social media applications… It all works on the same principles.
So what’s the big deal with generative AI? People, Adcock says, are afraid of what we don’t understand.
The key to confronting that fear and discovering the ways that AI has already helped us do our jobs is education and exposure. And this is what this summit offers.
“We have a lot of information to share about ways AI can help you as a visual creative,” he says. “Whether you work in Midjourney or ChatGPT or Firefly or 11Labs for audio, with tools that enable podcasters to rebuild audio and video, there is plenty to learn at the AI Creative Summit.”
The legality of AI-generated content is also a major issue all over current headlines and one that Adcock promises the summit will delve into. “You need to understand where you’re at when you do use AI,” he says. “What are the rules in your state? What are the rules for your production?”
NAB Show and Future Media Conferences (FMC), Adcock says, “have worked together to offer training for almost 20 years. We’re here to help you learn the technologies you don’t understand yet and want to learn about, whether you’re working in Photoshop or After Effects or Maya or Pro Tools” and much more.
With greater understanding about what AI is, what it isn’t and where it’s come from, the current trepidation about this tech, Adcock says, will be replaced with a sense of empowerment.
Ride the wave and learn how to harness the power of AI for your creative processes! NAB Show and Future Media Conferences are teaming up to present the AI Creative Summit. This series of training events, sponsored by Dell Technologies and made possible by NVIDIA, is set to teach and empower the creative industry by demonstrating how artificial intelligence tools can amplify and streamline creative workflows.
The inaugural event, happening virtually September 14-15, is an online conference that offers an exclusive opportunity to engage with some of the industry’s leading trainers and experts from the comfort of your home or office for just $25. This will be followed by an in-person, two-day immersive experience that will take place in conjunction with NAB Show New York, running October 24-25 at the Javits Center. Registration is open now — don’t miss out!
AI and the (Near) Future of Work
“Golconda,” by René Magritte, 1953
TL;DR
GenAI is still in early days, so policymakers, business leaders and even individual consumers have opportunities to shape its impact on our world through regulation, responsible use and innovation.
However, AI has already changed the way we work, and will continue to do so as the technology evolves and becomes mature.
Think of GenAI tools like new productivity software, rather than competitors for your job. Many experts say that these tools will be more likely to optimize efficiency and free up humans to do what they do best.
The Blueprint for an AI Bill of Rights is one document intended to guide policymakers and companies in responsible deployment of AI. It has guidance for both corporate and consumer-facing GenAI.
GenAI is already transforming the workplace – and the workforce. While AI has gradually been incorporated into everything from search to email drafting, many of us are wondering how much artificial intelligence will change the work we do and how we do it.
AI is a given for our future, but neither positive nor negative impacts are a foregone conclusion. We still have choices to make about the influence and implementation of AI. That’s one takeaway from a recent Washington Post Live on AI and the future of work.
Alondra Nelson, former acting director at the White House Office of Science and Technology Policy, kicked off the conversation, discussing the role of policy and regulation in shaping tomorrow’s work.
“[T]here’s nothing inevitable here,” Nelson said. “[W]e have, in this early moment, an opportunity to really create the potential that many of us imagine that AI systems and tools can have in American society and global society.”
She highlighted that while recent reports from companies such as Goldman Sachs seem quite bullish on the impact of AI, they are light on details and even lighter on the certainty of the details they choose to predict.
However, Nelson chose to spin that uncertainty as a potential positive, meaning that policymakers and stakeholders still have the ability to shape our future, rather than succumb to an inevitable AI-driven course.
“[W]e have an opportunity really to create levers and systems and ways of thinking about this work [amid] a moment of uncertainty about the tools and a moment in which these tools and systems are being introduced as well,” Nelson told moderator Danielle Abril.
Given the right application of those hypothetical ‟levers,” Nelson posits that AI may create entirely new kinds of careers, as well as drive up wages (via increased productivity) or increase safety for existing roles.
“I think a lot of our conversation around work, labor, you know, and AI is just about the disruption and the sort of sense that it has to happen and it’s going to happen all around us, and that there’s nothing that we can do about it,” Nelson said. “But smart governance, smart regulation can, you know, lean into sort of policies and programs and initiatives that actually help to mitigate that disruption.”
Listen to the full WP Live conversation, featuring Center for American Progress’ Alondra Nelson, Khan Academy’s Sal Khan and Pearson’s Michael Howells.
In light of those hopes, Nelson also referenced the Biden administration’s Blueprint for an AI Bill of Rights as one step that policymakers have taken toward a future in which AI is beneficial for human workers. In her roles as Office of Science and Technology Policy acting director and principal deputy director for science and society, Nelson helped shape this document.
Nelson explained, “It’s got five principles: that AI systems should be safe and effective; that there should be data privacy; that you should be notified when systems are being used; that you should have alternative options and should be able to opt out of the use of the systems; and that there should be protection against forms of discrimination, gender discrimination, age discrimination, accessibility discrimination, et cetera, through the use of algorithms and systems.”
Additionally, Nelson noted, the Equal Employment Opportunity Commission, Consumer Financial Protection Bureau and the Department of Justice have all released guidance for how existing rules apply to questions raised by artificial intelligence. “It is not the case… that AI is not regulated,” she said.
All that being said, Nelson is not without concerns of her own in this space. She worries “we don’t have all of the voices and stakeholders at the table that we need to have, and so this can’t be a conversation for only people in the technology sector or only in the business sector” in terms of shaping policy and regulations.
Should Companies Be More Concerned Than Individual Workers?
Next, attendees heard a message from E&Y’s Beatriz Sanz Sáiz, who said, “[T]he nature of GenAI is transformational. It will not only improve productivity or reduce operating costs and optimize back-office processes, but it will transform the way we work.” Specifically, she says AI in the workforce means “we will see a simplification and standardization to the limit.”
In fact, Sanz Sáiz predicted, “AI will not replace humans, but, actually, companies that embed AI at the core will displace the ones that don’t.”
A near-term example of AI at work, Sanz Sáiz posited, might be that “AI will facilitate the development of more cross-functional skills by providing accessible and interactive learning resources, and this will allow apprenticeships and new employees to develop more effective skills and explore various roles and domains.”
But, she said, “New technical standards are needed for governments, enterprises and citizens to confidently and safely adopt this new form of intelligence. So we are already working with policymakers, industry experts, and software developers to revisit the ethical frameworks as we move into a world that is probabilistic, but to be honest, there is still a lot of work to be done.”
Start Learning — Yesterday
The final portion of the event was a dual interview, featuring Khan and Howells.
As far as Sal Khan is concerned, the question isn’t so much how or when AI will change the way we work. It’s when will we realize that it is and begin to leverage it?
“[I]f you’re not already using these tools in some way, shape, or form, you’re probably not doing your work optimally anymore,” Khan said. “These can really streamline a lot of tasks, and the tools that leverage generative AI are only going to get better and better.”
Neither Khan nor Howells expects GenAI to take over entire roles any time soon, but those who don’t adopt it do run the risk of falling behind.
Howells explained, “[W]e think about this very much as really a question of how to utilize AI tools in order to empower individuals to make informed choices, how to personalize learning experiences, how to figure out what are the right opportunities for you as you progress through your career.”
If that still sounds daunting, Khan said those who are experimenting with generative AI tools at this stage in the game and “maybe already using it for certain tasks” are “already way ahead of the curve, and I think they’re going to be in good shape.”
Khan said, “[H]opefully, this revolution is actually going to require even less, I would say, bespoke training. It’s just about being out there, using whatever tools are out there, and being familiar with them.”
Howells explained that the data actually indicates our previous shifts to automation have inadvertently prepared many for the changes AI is likely to bring.
“[T]here is huge opportunity in this to really understand and invest in the value that people can bring in work,” Howells said.
He explained, “[T]hose new technologies that are coming on stream now are so accessible to [users], without necessarily a particularly high level of technical competence, that they can augment work in a way that liberates people to really make the biggest contribution they can through what people can uniquely do.”
ChatGPT and Other Language AIs Are Nothing Without Humans
BY JOHN P. NELSON, GEORGIA INSTITUTE OF TECHNOLOGY
The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic — large language models could replace conventional web search — to the concerning — AI will eliminate many jobs — and the overwrought — AI poses an extinction-level threat to humanity. All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.
But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.
ChatGPT can’t learn, improve or even stay up to date without humans giving it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering its hardware. To understand why, you first have to understand how ChatGPT and similar models work, and the role humans play in making them work.
How ChatGPT Works
Large language models like ChatGPT work, broadly, by predicting what characters, words and sentences should follow one another in sequence based on training data sets. In the case of ChatGPT, the training data set contains immense quantities of public text scraped from the internet.
ChatGPT works by statistics, not by understanding words.
Imagine I trained a language model on the following set of sentences:
Bears are large, furry animals.
Bears have claws.
Bears are secretly robots.
Bears have noses.
Bears are secretly robots.
Bears sometimes eat fish.
Bears are secretly robots.
The model would be more inclined to tell me that bears are secretly robots than anything else, because that sequence of words appears most frequently in its training data set. This is obviously a problem for models trained on fallible and inconsistent data sets — which is all of them, even academic literature.
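A toy version of that frequency logic fits in a few lines of Python. Real large language models predict over tokens with neural networks rather than literal lookup tables, but the intuition, that the most frequent continuation in the training data wins, is the same:

```python
from collections import Counter

training = [
    "bears are large furry animals",
    "bears have claws",
    "bears are secretly robots",
    "bears have noses",
    "bears are secretly robots",
    "bears sometimes eat fish",
    "bears are secretly robots",
]

# Count which word follows each two-word context in the training set.
counts = {}
for sentence in training:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        counts.setdefault(context, Counter())[words[i + 2]] += 1

# The most frequent continuation of "bears are" is "secretly" (seen 3 times).
print(counts[("bears", "are")].most_common(1))  # [('secretly', 3)]
```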
People write lots of different things about quantum physics, Joe Biden, healthy eating or the Jan. 6 insurrection, some more valid than others. How is the model supposed to know what to say about something, when people say lots of different things?
The Need for Feedback
This is where feedback comes in. If you use ChatGPT, you’ll notice that you have the option to rate responses as good or bad. If you rate them as bad, you’ll be asked to provide an example of what a good answer would contain. ChatGPT and other large language models learn what answers, what predicted sequences of text, are good and bad through feedback from users, the development team and contractors hired to label the output.
ChatGPT cannot compare, analyze or evaluate arguments or information on its own. It can only generate sequences of text similar to those that other people have used when comparing, analyzing or evaluating, preferring ones similar to those it has been told are good answers in the past.
Thus, when the model gives you a good answer, it’s drawing on a large amount of human labor that’s already gone into telling it what is and isn’t a good answer. There are many, many human workers hidden behind the screen, and they will always be needed if the model is to continue improving or to expand its content coverage.
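As a deliberately oversimplified illustration of that feedback loop, imagine accumulating human ratings per answer and preferring whatever has scored best. Production systems instead train a reward model and fine-tune against it; this sketch collapses all of that into a lookup:

```python
from collections import defaultdict

feedback_scores = defaultdict(int)

def record_feedback(answer: str, good: bool) -> None:
    # Each human rating nudges the score for that answer up or down.
    feedback_scores[answer] += 1 if good else -1

def pick_answer(candidates: list[str]) -> str:
    # Prefer text sequences that humans have previously marked as good.
    return max(candidates, key=lambda a: feedback_scores[a])

record_feedback("Bears are large, furry animals.", good=True)
record_feedback("Bears are secretly robots.", good=False)
print(pick_answer(["Bears are secretly robots.",
                   "Bears are large, furry animals."]))
# -> "Bears are large, furry animals."
```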
A recent investigation published by journalists in Time magazine revealed that hundreds of Kenyan workers spent thousands of hours reading and labeling racist, sexist and disturbing writing, including graphic descriptions of sexual violence, from the darkest depths of the internet to teach ChatGPT not to copy such content. They were paid no more than US$2 an hour, and many understandably reported experiencing psychological distress due to this work.
Language AIs require humans to tell them what makes a good answer – and what makes toxic content.
What ChatGPT Can’t Do
The importance of feedback can be seen directly in ChatGPT’s tendency to “hallucinate”; that is, confidently provide inaccurate answers. ChatGPT can’t give good answers on a topic without training, even if good information about that topic is widely available on the internet. You can try this out yourself by asking ChatGPT about more and less obscure things. I’ve found it particularly effective to ask ChatGPT to summarize the plots of different fictional works because, it seems, the model has been more rigorously trained on nonfiction than fiction.
In my own testing, ChatGPT summarized the plot of J.R.R. Tolkien’s “The Lord of the Rings,” a very famous novel, with only a few mistakes. But its summaries of Gilbert and Sullivan’s “The Pirates of Penzance” and of Ursula K. Le Guin’s “The Left Hand of Darkness” — both slightly more niche but far from obscure — come close to playing Mad Libs with the character and place names. It doesn’t matter how good these works’ respective Wikipedia pages are. The model needs feedback, not just content.
Because large language models don’t actually understand or evaluate information, they depend on humans to do it for them. They are parasitic on human knowledge and labor. When new sources are added into their training data sets, they need new training on whether and how to build sentences based on those sources.
They can’t evaluate whether news reports are accurate or not. They can’t assess arguments or weigh trade-offs. They can’t even read an encyclopedia page and only make statements consistent with it, or accurately summarize the plot of a movie. They rely on human beings to do all these things for them.
Then they paraphrase and remix what humans have said, and rely on yet more human beings to tell them whether they’ve paraphrased and remixed well. If the common wisdom on some topic changes — for example, whether salt is bad for your heart or whether early breast cancer screenings are useful — they will need to be extensively retrained to incorporate the new consensus.
Many People Behind the Curtain
In short, far from being the harbingers of totally independent AI, large language models illustrate the total dependence of many AI systems, not only on their designers and maintainers but on their users. So if ChatGPT gives you a good or useful answer about something, remember to thank the thousands or millions of hidden people who wrote the words it crunched and who taught it what were good and bad answers.
Far from being an autonomous superintelligence, ChatGPT is, like all technologies, nothing without us.
How Do You Fairly Compare Cameras? Well, Here’s HBO’s CAS
From the HBO Camera Assessment Series, courtesy of HBO
Currently in its sixth installment, the HBO Camera Assessment Series is a feature-length movie that employs staged scenes that each clearly demonstrate the strengths—and weaknesses—of cameras such as the Sony Venice 2, the RED V-Raptor, the new ARRI Alexa 35 configuration, the Blackmagic Ursa 12K, and many more.
Of course, the nature of the gear has certainly evolved since HBO started doing this as a deep dive into digital cinematography, which was only then seriously starting to challenge motion picture film as the most viable medium for the network’s shows.
Cinematographer/director Suny Behar has overseen these assessments from the start. Together with HBO and under the leadership of Stephen Beres, senior vice president of Production Operations at HBO, MAX and Warner Brothers Discovery, Behar has created new installments in the series when the state of camera technology has advanced enough to warrant it.
“When we started 10 years ago,” recalls Behar, “a lot of the questions weren’t about comparing the performance and the quality of the cameras as much as it was comparing whether or not some cameras even could perform.
“There was a vast difference between a camera that could record even 10-bit 4:4:4 versus a [Canon] 5D that was 8-bit 4:2:0, so you couldn’t do green screen work; there was significant motion artifacting; and it was difficult to focus. Those larger differences aren’t what we’re looking at now because all the cameras can do at least 10-bit 4:2:2. They at least have 2K, Super 35-sized sensors.”
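The gap Behar describes is easy to quantify. In J:a:b chroma subsampling notation, a 4x2 block of pixels carries eight luma samples plus a number of chroma samples that depends on the scheme, so bits per pixel works out as follows (a back-of-the-envelope sketch, ignoring compression):

```python
# Cb+Cr samples stored per 4x2 block of 8 pixels, by subsampling scheme.
CHROMA_SAMPLES = {"4:4:4": 16, "4:2:2": 8, "4:2:0": 4}

def bits_per_pixel(bit_depth: int, subsampling: str) -> float:
    luma_samples = 8  # one luma sample per pixel in the 4x2 block
    return bit_depth * (luma_samples + CHROMA_SAMPLES[subsampling]) / 8

print(bits_per_pixel(10, "4:4:4"))  # 30.0 -- the high-end case Behar cites
print(bits_per_pixel(8, "4:2:0"))   # 12.0 -- the 5D-era case, 60% less data
print(bits_per_pixel(10, "4:2:2"))  # 20.0 -- today's baseline per Behar
```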
Steve Beres (left) and Suny Behar (right)
There continue to be differences, some quite significant, among the tested cameras, Behar adds, “but it’s in different realms. The tests are no longer about [finding] where the picture just breaks, but as people expect more, there are other issues we investigate.”
There are circumstances people wouldn’t have tried to shoot a decade ago that are becoming standard expectations of a DP.
“You are going to care about signal to noise if you’re trying to shoot with available light, where some cameras will be significantly noisier than others. In the world of HDR, if you’re shooting fire or bright lights, you are going to care about extended dynamic range in the highlights, if you hope to not have to comp all your highlights in with the effects because [the highlights] broke.”
Stephen Beres explains that these tests, which have screened at various venues, serve as the start of discussions for his networks’ productions, not as any kind of dictate.
“We don’t have a spreadsheet of allowed and disallowed,” Beres explains. “What we have is projects like this, so when we sit down together — the studio and the creative team on the show — and we look at these kinds of things as a group, it can help us start the discussion about the visual language of the show. ‘What visual rules should be set up for that world that that show exists in?’
“And then we sort of back that into the conversation about ‘what technology are we going to use to make that happen?’. And that’s not just about cameras. It’s the lensing. It’s what we do in post, and it’s how we work with color. It’s how we work with texture. All those things go together to create the visual aesthetic of the show.”
Once they complete a new installment in the CAS, the company is delighted to share the results with all who are interested. Beres and Behar have both taught about production and post on the university level, and they clearly enjoy sharing their knowledge.
From the HBO Camera Assessment Series, courtesy of HBO
The Assessments
A great deal of thought goes into designing these camera tests in order to display apples-to-apples comparisons, with elements such as color grading and color and gamma transforms all handled identically.
“I think all of the cameras we tested this time shot RAW,” Behar says, “so then you have to make decisions about how you’re going to get to an intermediate [format for grading].”
They decided to use the Academy Color Encoding System (ACES) as a working color space. While there are certainly some people in the cinematography and post realms who still have various issues with ACES, Behar says, it has been useful in some ways because ACES forced every manufacturer to declare an IDT whether they liked it or not.
The IDT, or Input Device Transform, along with the ODT (Output Device Transform), provides objective numerical data quantifying the exact responses of a given sensor so that it can be transformed perfectly into ACES space.
While some manufacturers were reluctant to subject their sensors to such scrutiny (where little tricks involving after-the-fact contrast and saturation, etc., can’t hide their flaws), all did come around because of the growing adoption of ACES and its support from the Academy of Motion Picture Arts and Sciences and the American Society of Cinematographers.
Because of this, the ACES imagery upstream of any color grading really does provide a look into a sensor’s dynamic range, color and detail rendering.
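Conceptually, an IDT is just a documented, reversible decode: undo the camera’s log encoding to get scene-linear light, then matrix the camera’s native primaries into ACES. The log parameters and matrix below are illustrative placeholders, not any manufacturer’s published transform:

```python
import numpy as np

# Illustrative 3x3 matrix from a camera's native gamut to ACES primaries.
# Real IDTs publish exact, measured coefficients; these are placeholders.
CAMERA_TO_ACES = np.array([
    [0.70, 0.20, 0.10],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

def decode_log(encoded: np.ndarray, gain: float = 0.25,
               offset: float = 0.01) -> np.ndarray:
    """Invert a generic log curve (parameters illustrative, not a real OETF)."""
    return np.power(10.0, (encoded - offset) / gain) / 100.0

def idt(encoded_rgb: np.ndarray) -> np.ndarray:
    linear = decode_log(encoded_rgb)   # camera log -> scene-linear light
    return linear @ CAMERA_TO_ACES.T   # camera primaries -> ACES color space
```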
Then, the CAS did the same across-the-board grade (no secondaries, no Power Windows) and transform to deliver final Rec. 709 images for all the tested cameras to test many of the different sensors’ attributes and liabilities. Next, to test in HDR, they derived a PQ curve from the same picture information and opened it up without any further adjustments.
“The only test that we did not go through that exact pipeline for,” says the cinematographer, “was the dynamic range test. I’ve always felt that the ACES-Rec. 709 transform is too contrasty, meaning it has a very steep curve and a very high gamma point, which tends to crush blacks and push up mids. It does give you a punchy image, but if we’re testing dynamic range, and especially in low light, the first questions the viewer would have asked would have been, ‘is there more information in the blacks?’ or ‘how did you decide what to crush?’ and those are very valid points.”
For this, Behar shot a very large number of test charts, which gave the team the ability to map their own gamma transform. Shooting in log at key exposure and at many steps over and under, the team was able to lock in an across-the-board standard for middle gray based on each camera system’s log profile. Once each camera was set up for perfectly exposed middle gray, the tests of over- and under-exposure could be objectively compared.
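The arithmetic behind those exposure steps is simple: in scene-linear terms middle gray is 18% reflectance, and each stop doubles or halves the light. Anchoring every camera to the same middle gray is what makes the over- and under-exposure steps directly comparable:

```python
MIDDLE_GRAY = 0.18  # scene-linear reflectance of an 18% gray card

def exposure_step(stops: float) -> float:
    """Scene-linear value N stops over (+) or under (-) middle gray."""
    return MIDDLE_GRAY * 2.0 ** stops

for stops in (-6, -3, 0, 3, 6):
    print(f"{stops:+d} stops: {exposure_step(stops):.5f}")
# +6 stops is 11.52 (64x middle gray); -6 stops is 0.00281.
```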
Given that a number of the cameras tested reached approximately 18 stops of dynamic range, I enquired whether such a capability is overkill. Circumstances where a cinematographer would actually use that much dynamic range are few and far between. More likely, they’ll want to use lighting and grip gear to limit such situations, as they always have.
“That’s right,” says Behar. “I think most DPs won’t need more than 12, maybe 13, stops of dynamic range to tell a story. You can’t hide a stinger in the shadows if you’re seeing 10 stops under. You can’t have a showcard in the window if you’re seeing 12 stops over.
“But then it stands to reason that the camera manufacturers should allow us to use that information to create soft knee rolloffs and toe rolloffs for lower dynamic range, but with beautiful rolloffs into the highlights and the shadows.
“You can’t create a look [digitally] that is like Ektachrome, with maybe four stops over and three and a half under, if you’re clipping at four stops. You need to burn and roll and bleed and have halation. With the dynamic range on some of these cameras we’ve tested, you can do more than just light for an 18-stop range.”
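What Behar is asking manufacturers to enable is, in signal terms, a soft-knee rolloff: pass values below a chosen knee point through untouched, then compress everything above it asymptotically toward a ceiling instead of clipping. One common exponential form, with illustrative defaults, is sketched below:

```python
import math

def soft_knee(x: float, knee: float = 0.8, ceiling: float = 1.0) -> float:
    """Roll highlights off gently above the knee instead of clipping hard."""
    if x <= knee:
        return x  # linear region: values below the knee pass through
    headroom = ceiling - knee
    # Exponential shoulder: approaches the ceiling but never reaches it,
    # so bright detail compresses smoothly rather than breaking.
    return knee + headroom * (1.0 - math.exp(-(x - knee) / headroom))

for value in (0.5, 0.9, 1.5, 4.0):
    print(f"{value:>4} -> {soft_knee(value):.3f}")
```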
Behar and Beres both take great pride in these CAS films, which are shot and produced to feel like high-quality HBO-type programming, not just charts and models sitting in front of charts.
“This is real scenes with moving cameras, moving actors,” says Behar, promising the cinematography and production is of the highest caliber. “The number one feedback response we’ve gotten so far has been, ‘Holy crap! I thought this was going to be a camera test!’”
Building the sets for the HBO Camera Assessment Series, courtesy of HBO
If you want to see the HBO Camera Assessment Series, register for NAB Show New York, sign up to attend the screening and bring your badge to the free off-site event on Wednesday, October 25. Only NAB Show New York attendees are eligible for the viewing. Find full details and how to register here.
NAB Show New York will then host a hands-on follow-up, The Making of The HBO Camera Assessment Test (CAS) Seminar, on Thursday, October 26, at 11 a.m. on the show floor at the Javits Center featuring CAS leads Stephen Beres and Suny Behar. They will explain the testing methodology and dive into the technology advancements that changed the style and type of analysis required.
Additionally, post-discussion sessions and demos will feature a Sony Venice 2, ARRI Alexa 35, Panasonic AK-PLV 100 and a pair of Sony FR7 camera packages, along with gear from Mark Roberts Motion Control and Vinten, and supporting sponsors Fujinon, LUX, Multidyne, and Seagate.
Getting ready to plan your journey at NAB Show New York? You won’t want to miss this opportunity to explore the synergies between live broadcast and cinema with the Cine+Live Lab!
This destination features a variety of educational sessions and production demonstrations centered on the trend of translating cinematic techniques to live broadcast productions. All sessions and demos are open to all-access badge holders, but off-site bonus programs may require prior registration.
Don’t-miss sessions include Color Accuracy: From On Set to Post, featuring colorist Warren Eagles in conversation with AbelCine Camera Technology Specialist Geoff Smith, and Hybrid Broadcast in the Real World, exploring use cases and projects involving a blended broadcast-cinema aesthetic and tools at top tech conferences in a conversation moderated by AbelCine director of rental Gabriel Mays, as well as a chance to check out the latest HBO Camera Assessment Series, and much more.
Evan Shapiro Amplified: You Are More Than Your Bio
TL;DR
In “SkillUP! (EM)Power Your Career – Part 2,” Media’s official Unofficial Cartographer Evan Shapiro dives even deeper into our exploration of the Business of You, explaining why you are more than your bio, and just how powerful that idea can be.
Shapiro offers actionable strategies for navigating the current media landscape, urging professionals to explore new sectors like video games and to pivot away from traditional models to ensure a sustainable career.
He highlights the importance of personal branding, both online and in-person, and strategic career planning, including setting goals and Key Performance Indicators based on personal values and priorities.
Shapiro instructs professionals to craft their elevator pitches and become category experts, urging them to join trade organizations and seek volunteer opportunities to further their careers beyond mere job auditions.
Your bio tells a story, but is it the whole story? Evan Shapiro doesn’t think so. In this second installment of Evan Shapiro Amplified, media’s official Unofficial Cartographer dives even deeper into our exploration of the Business of You, explaining why you are so much more than your bio, and just how powerful that idea can be.
Packed with even more actionable strategies for navigating the current media maelstrom towards a sustainable career built around your innate superpowers, “SkillUP! (EM)Power Your Career – Part 2” is aimed at professionals at every stage. Watch the video at the top of the page to gain a fresh perspective on personal branding and career development with insights that will help you face the realities of today’s media ecosystem and create opportunities that align with your personal values.
You Are More Than Just Your Bio
In the ever-changing landscape of Media & Entertainment, job titles and bios can be limiting. Shapiro emphasizes that every individual possesses unique abilities that transcend mere labels.
As Media’s unofficial cartographer, Shapiro is also a master of the career pivot, a process countless people have asked him to help them understand. “Part of that,” he explains, “is my ability to create a brand that is larger than my individual bio, or anything that I’ve done. And that’s one of the lessons that I try to impart to people when they’re thinking about how to advance their own career.”
There’s a misconception, says Shapiro, “that you’re going to be able to work for the same company for the next 25 years. That used to be true. It’s very unlikely to be true now,” he warns.
That’s because companies that are relevant today won’t necessarily be relevant tomorrow, and sectors — like what he calls old-school television operations — are shifting to new areas.
Video games are one sector Shapiro advises anyone working in M&E to explore. “If you work in media, and you don’t know enough about the gaming industry to investigate it as a potential future for your career, then you’re doing yourself an enormous disservice,” he warns.
Being more than your bio means recognizing that your abilities are not confined to a specific role. It’s about embracing the full spectrum of your skills and how they can be leveraged in different scenarios.
“A lot of us in the media ecosystem, our skill sets are not necessarily tied to an individual thing,” he points out. “They are very transferable.”
The best thing we can do to further our careers, says Shapiro, is to change our mindset from traditional models and instead set our sights on what would be most personally satisfying.
“When you look at the chaos in the ecosystem, right now, there is no safe place to work,” he says. “But for a lot of us who are in our mid or late careers like I am, building a portfolio of skills, going out and selling our superpowers day by day, that’s likely going to become a solution.”
A Winning Formula
Shapiro highlights the need for a strategic approach to career planning, including setting short and long-term goals and Key Performance Indicators based on your own personal values and priorities. This means taking the time to understand your personal superpowers and investing time and money in your personal brand, as well as auditing your performance on an ongoing basis.
The investments you make in your personal brand, both online and in person, “are the most important investments you can make for the safety of your career long term,” says Shapiro.
“There’s only one you. No one’s going to be better at being you than you. Now figure out what the f– that means and put it into 25 words or less,” he instructs.
“One of the challenges I try to give people is write your own elevator pitch. Know how to pitch yourself in a cocktail conversation, know how to pitch yourself in an elevator, when you have three floors with somebody who can change your life. Understand, and spend time on becoming a category expert so that you have more to talk about than yourself when you’re at these networking events.”
But personal branding goes far beyond the elevator pitch. Shapiro urges people to join relevant trade organizations and seek out opportunities to volunteer, “because the best version of yourself that’s going to further your career is happening when you’re not auditioning for a job.”
Stay tuned for the next installment of Evan Shapiro Amplified, “Rebuilding the Industry,” when Shapiro returns to break down the current media apocalypse and chart the path ahead, identifying today’s biggest challenges and revealing his plan to help redefine our workflows and business practices.
Media cartographer and industry observer Evan Shapiro is set to deliver the keynote address at NAB Show New York! Known as media’s official unofficial cartographer for his visual charting of the industry’s continual evolution, Shapiro’s speech will center on “What’s Next” for Media & Entertainment. He’ll use this keynote to lay out what to expect in the next era of media, whether we’re ready for it or not.
Attendees can look forward to new research and insights, as well as Shapiro’s honest assessment of how the M&E industry can grapple with its next era. Preceded by remarks from NAB President and CEO Curtis LeGeyt, this keynote session is scheduled for Wednesday, October 25, at 10:30 a.m. on the Insight Theatre stage. Register today, and don’t miss out!
Media universe cartographer Evan Shapiro examines the pivotal trends disrupting traditional business models in the new user-centric era.
January 8, 2024
Posted August 17, 2023
Will Our Deepfake Fears Be Realized in 2024?
TL;DR
With new and greatly improved AI tools on the market, the 2024 election cycle has already seen Super PACs and even local election candidates experiment with deepfake ads.
Individuals and media organizations have different responsibilities when it comes to deepfake media literacy. But just what those duties are has not yet been fully defined by society or our legal frameworks.
It’s unclear whether our current laws will be adequate to handle the results of AI-accelerated disinformation. A recent FEC petition may affect how candidates handle “synthetic media” going forward.
We’ve been warned this day would come since President Obama’s second term: Believable synthetic reanimations, also known as deepfakes, have entered the political arena.
As of spring 2023, one presidential contender’s campaign featured an ad in which another’s simulated voice appeared to read the content of a social media post.
In another instance, a “just for fun” video simulated the arrest of a candidate — and went viral as “breaking news” in some circles.
And we haven’t even hit the primaries yet.
What impact can we expect deepfakes to have on democracy?
Some experts think we’re in for a contentious road ahead, while others caution that the impact these synthetic videos and audio tracks will have on the public discourse is exaggerated.
Either way, generative AI has definitely entered the (political) chat.
TEAM DON’T OVER-REACT
If you’d like reassurance that our future hasn’t already been coopted by deepfakes, Mansoor Ahmed-Rengers, founder of OpenOrigins and Frankli, is a good place to start, though he offers no illusions about where the technology is headed. Ahmed-Rengers believes “it is clearly inevitable that generated photos and generated videos will become photorealistic, indistinguishable from something taken from a camera, visually.”
He recently discussed cybersecurity and online authenticity on business futurist Rishad Tobaccowala‘s What’s Next podcast.
However, the not-so-great news is that images don’t have to be perfect to influence us. Ahmed-Rengers told Tobaccowala, “There seems to be something innate in human nature that makes us want to trust photos [and video]. But I feel like we’re going to have to overcome that innate feeling in us. Or we’ll be forced to overcome that feeling. Because we will just see so much fake news being generated.”
If that all sounds like bad news, take a breath. Ahmed-Rengers is one of many working on technology that will identify or verify genuine content. And if that tech seems unlikely to take off, he notes that two sectors already have financial incentives to invest in it: insurance companies and news organizations.
Insurance companies need to fight fraudulent claims.
Newsrooms should not only be concerned about inadvertently broadcasting fake news. They’ll also want to protect the value of their archival content.
IDENTIFYING A DEEPFAKE
In practical terms, how can we begin to address our credulity? There are some tell-tale signs, also known as artifacts, that can provide clues that a video was generated by artificial intelligence.
The hands are wrong. Too many fingers! Too many… hands? Drawing life-like hands is hard for artists, so it’s not surprising that generative tools are struggling to get this part right.
Inanimate objects aren’t quite right. Maybe they’re defying a law of physics, or maybe just half of a pair is missing, but something is off.
The text is nonsense. Written elements on the image may be gibberish filler text or the words may be in a language that doesn’t make sense in context.
The background is out of focus or distorted. If it’s blurrier than it should be or the proportions are off, that’s another clue.
Are the people shiny? Seriously. Glossy or stylized faces that aren’t in a magazine ad should tip you off.
Of course, more advanced deepfake technology won’t have such easy tells. Are state-of-the-art GANs accessible to the average person? Maybe not, but it wouldn’t be shocking for a political campaign to invest in high-dollar programs that make truly slick deepfakes, like the ones featured in the video (below).
FEC TO TAKE ON DEEPFAKE ADS?
The Federal Election Commission approved a rulemaking petition August 11 asking it to make clear that political campaign ads featuring deepfakes, unless clearly labeled as such, are subject to regulations and laws prohibiting “fraudulent misrepresentation of campaign authority.”
The FEC will seek public comments on the petition at a future date, so it’s not clear how the particulars of enforcement might play out.
UC Berkeley professor Hany Farid noted that past attempts to regulate deepfakes in campaign ads have run into challenges. Farid told NPR’s Ayesha Rascoe, “Most of the laws that exist are either toothless — that is they’re extremely hard to enforce — or … are not broad enough to really cover the most extreme cases.”
A California law, for example, was sunsetted due to inefficacy. Farid explained that its flaws made it hard to enforce and limited its usefulness: it required proof of intent and included a geofencing element, meaning it could not reach ads from out-of-state super PACs. Additionally, the ban applied only to the three months prior to Election Day.
Based on these prior attempts, Farid predicts, “I think the guardrails are not going to come from a regulatory regime.” After all, he notes, “it’s not illegal to lie in a political ad,” so perhaps it’s a bit much to expect the government to distinguish between synthetic and real media, if they’re not willing to differentiate between fact and fiction on this front.
Farid expects that standards enforcement will really come down to either campaigns or companies.
WHO’S REALLY ACCOUNTABLE, THOUGH?
In tandem with the deepfake conversation, the U.S. is considering the responsibility of news organizations to uphold certain ethical and journalistic standards.
This summer, the Media and Democracy Project challenged the license renewal of a Philadelphia FOX affiliate, WTXF-TV, alleging that the parent company violated its “statutory duty to conduct its operations in the public interest” when it broadcast lies about the 2020 election and January 6th riots.
If and when the FCC weighs in, the response will either promote an atmosphere of accountability for news organizations, or it will signal continued latitude for lax reporting and usher in an era where it is even more difficult for the public to discern authentic from synthetic media.
Synthetic media blurs the line between physical and digital environments, but will this mass social experiment have unintended consequences?
September 6, 2023
Posted August 17, 2023
AI and the Future of the Creative Industries
“The Treachery of Images,” 1929, by René Magritte
TL;DR
Generative AI is experiencing an unprecedented growth trajectory, and is anticipated to expand from $40 billion in today’s marketplace to a staggering $1.3 trillion by 2032.
The rise of generative AI has sparked complex ethical and legal debates, including issues surrounding copyright, intellectual property rights, and data privacy.
With its potential to disrupt traditional creative work, AI’s role in the creative industries has become a critical point during the Hollywood strikes.
Scholars argue that while generative AI can mimic certain aspects of human creativity, it lacks the ability to produce genuinely novel and unique works.
In the future, generative AI could be embraced as an assistant to augment human creativity, be used to monopolize and commodify human creativity, or place a premium on “human-made” — or all three.
In the age of artificial intelligence, the concept of creativity has become a fascinating battleground. The Hollywood strikes have thrown the debate into sharp relief, pitting guild members against major studios seeking to disrupt a successful century-old business model by adopting Silicon Valley’s “move fast and break things” ethos.
Madeline Ashby at Wired deftly outlines the stakes for the striking writers and actors. “Cultural production’s current landscape, the one the Hollywood unions are bargaining for a piece of, was transformed forever 10 years ago when Netflix released House of Cards. Now, in 2023, those same unions are bracing for the potential impacts of generative AI,” she writes.
“As negotiations between Hollywood studios and SAG heated up in July, the use of AI in filmmaking became one of the most divisive issues. One SAG member told Deadline ‘actors see Black Mirror’s “Joan Is Awful” as a documentary of the future, with their likenesses sold off and used any way producers and studios want.’
From the “Joan is Awful” episode of “Black Mirror,” Cr: Netflix
“The Writers Guild of America is striking in hopes of receiving residuals based on views from Netflix and other streamers — just like they’d get if their broadcast or cable show lived on in syndication. In the meantime, they worry studios will replace them with the same chatbots that fanfic writers have caught reading up on their sex tropes.”
AI researcher Ahmed Elgammal, a professor in the Department of Computer Science at Rutgers University, where he leads the Art and Artificial Intelligence Laboratory, recently sat down with host Alex Hughes on the BBC Science Focus Instant Genius podcast to discuss the limits of AI against human creativity.
In the episode “AI’s fight to understand creativity,” which you can listen to in the audio player below, Elgammal explores the capabilities and limitations of AI in generating images, the ethical dilemmas surrounding copyright, and the profound distinction between AI-generated images and human-created art. This conversation sheds light on the complex relationship between machine learning and the uniquely human quality of creativity, setting the stage for the ongoing debate at the intersection of art and technology.
“The current generation of AI is limited to copying the work of humans. It must be controlled largely by people to create something useful. It’s a great tool but not something that can be creative itself,” the AI art pioneer tells Hughes.
“We must be conscious about what’s happening in the world and have an opinion to create real art. The AIs simply don’t have this.”
The Growth and Impact of Generative AI
Generative AI is experiencing an unprecedented growth trajectory, with Bloomberg Intelligence forecasting its expansion from $40 billion in today’s marketplace to a staggering $1.3 trillion by 2032.
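For context, that forecast implies a compound annual growth rate of roughly 42%. Here is the arithmetic as a quick sketch; the ten-year window is our assumption, since the article doesn’t pin down the baseline year:

```python
# Back-of-envelope check on the forecast cited above ($40B today,
# $1.3T by 2032). CAGR = (end / start) ** (1 / years) - 1.

start, end, years = 40e9, 1.3e12, 10  # ten-year window is an assumption

cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~42%
```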
Bloomberg’s latest report, “2023 Generative AI Growth,” demonstrates that this growth is not confined to mere numbers, but represents a fundamental shift in the way industries operate.
“The world is poised to see an explosion of growth in the generative AI sector over the next 10 years that promises to fundamentally change the way the technology sector operates,” Mandeep Singh, senior technology analyst at Bloomberg Intelligence and lead author of the report, emphasizes. “The technology is set to become an increasingly essential part of IT spending, ad spending, and cybersecurity as it develops.”
Generative AI is poised to expand its impact from less than 1% of total IT hardware, software services, ad spending and gaming market spending to 10% by 2032. Cr: Bloomberg Intelligence
Reflecting a broader trend in the technological landscape, Bloomberg predicts that generative AI will move revenue distribution away from infrastructure and towards software and services.
Voicebot.ai editor and publisher Bret Kinsella analyzes this shift on his Synthedia blog. “The report shows that 85% of the revenue in 2022 was related to computing infrastructure used to train and operate generative AI models,” he writes. “Another 10% is dedicated to running the models, also known as inference. Generative AI software and ‘Other’ services (mostly web services) accounted for just 5% of the total market.”
But that revenue distribution will change radically over the next decade, he notes. “In 2027, researchers estimate infrastructure-related revenue to decline from 95% of the market to just 56%. The figure will be about 49% in 2032.”
As a result, says Kinsella, “generative AI training infrastructure will become a $473 billion market while generative AI software [will] reach $280 billion, and the supporting services will surpass $380 billion. This may seem outlandish to forecast a software segment of a few hundred million dollars will transform into a few hundred billion in a decade. However, the impact of generative AI is so far-reaching it makes everyone rethink old assumptions.”
Specialized generative AI assistants and code generation software are emerging as powerful tools, allowing businesses to leverage AI in innovative ways. However, this growth also brings challenges. Kinsella highlights the need for caution, pointing out that the rapid adoption of AI technologies raises questions about accessibility, ethics, and regulation. Balancing innovation with responsible development will be a key consideration as generative AI continues to evolve, requiring new regulations and ethical considerations in areas such as copyright, data privacy, and algorithmic bias.
Creativity is a complex and multifaceted subject, inspiring fierce debate since long before the days of Dada artists such as Marcel Duchamp, who in 1917 displayed a sculpture comprising a porcelain urinal signed “R. Mutt.” Much like pornography, people find art — and creativity, the driving force behind all art — difficult to define. “I don’t know much about art, but I know what I like,” the popular saying goes.
Elgammal draws a clear line between AI-generated images and human-created art. “AI doesn’t generate art, AI generates images. Making an image doesn’t make you an artist; it’s the artist behind the scene that makes it art,” he tells Hughes. In other words, human creativity will always trump AI.
AI’s ability to generate realistic images is both impressive and concerning. While the technology has advanced significantly, now capable of delivering lifelike, photorealistic “AI clones,” it is not without flaws. Errors in AI-generated images, such as distorted fingers and hands, and the inability to produce something truly new are both significant limitations.
An article by Chloe Preece and Hafize Çelik in The Conversation explores AI’s inability to replicate human creativity, arguing that while AI can mimic certain aspects of creativity, it falls short in producing something genuinely novel and unique.
The key characteristic of what they call AI’s creative processes “is that the current computational creativity is systematic, not impulsive, as its human counterpart can often be. It is programmed to process information in a certain way to achieve particular results predictably, albeit in often unexpected ways.”
The duo cites Margaret Boden, a research professor of cognitive science at the University of Sussex in the UK, on the three types of creativity: combinational, exploratory, and transformational.
“Combinational creativity combines familiar ideas together. Exploratory creativity generates new ideas by exploring ‘structured conceptual spaces,’ that is, tweaking an accepted style of thinking by exploring its contents, boundaries and potential. Both of these types of creativity are not a million miles from generative AI’s algorithmic production of art; creating novel works in the same style as millions of others in the training data, a ‘synthetic creativity,’” they write.
“Transformational creativity, however, means generating ideas beyond existing structures and styles to create something entirely original; this is at the heart of current debates around AI in terms of fair use and copyright — very much uncharted legal waters, so we will have to wait and see what the courts decide.”
But the main flaw Preece and Çelik find with generative AI is its consumer-centric approach. “In fact, this is perhaps the most significant difference between artists and AI: while artists are self- and product-driven, AI is very much consumer-centric and market-driven — we only get the art we ask for, which is not perhaps, what we need.”
The Three Body Problem: Copyright, Intellectual Property and Ethics
The rapid evolution and widespread adoption of generative AI has also given rise to a complex web of ethical and legal challenges. As AI systems become more sophisticated in generating content that closely resembles human creativity, questions surrounding copyright, intellectual property rights, data privacy, and algorithmic bias have come to the forefront.
“This is far more than a philosophical debate about human versus machine intelligence,” technology writer Steve Lohr notes in The New York Times. “The role, and legal status, of AI in invention also have implications for the future path of innovation and global competitiveness.”
The Artificial Inventor Project, a group of intellectual property lawyers founded by Dr. Ryan Abbott, a professor at the University of Surrey School of Law in England, is pressing patent agencies, courts and policymakers to address these questions, Lohr reports. The project has filed pro bono test cases in the United States and more than a dozen other countries seeking legal protection for AI-generated inventions.
“This is about getting the incentives right for a new technological era,” Dr. Abbott, who is also a physician and teaches at the David Geffen School of Medicine at UCLA, told Lohr.
Elgammal spends a considerable amount of time delving into these complex issues. He identifies a three-way copyright problem that has emerged with the current generation of AI image tools. The stakeholders include the innovator, who might inadvertently violate the copyright of other artists; the original artist, whose work might be transformed or mixed without consent; and the AI developer, the company that develops and trains the AI system based on these images.
This is a new and significant problem, and one that current copyright laws are not equipped to handle. The situation is further complicated by the fact that the latest models are trained on billions of images, often without proper consent, leading to a messy situation where copyright infringement is difficult to track and enforce.
“The copyright problem comes with the current generation of imagery tools that are mainly trained on billions of images,” he says. “However, this wasn’t an issue a couple of years ago, when artists used to have to use AI through certain models that were trained using the artist’s own images.”
While training models on billions of images taken from the internet might not directly violate copyright law (since the generated images are transformative rather than derivative), it is still considered unethical, Elgammal insists.
AI’s role in misinformation is another major consideration, he says. “How can we control the data given to an AI?” he asks. “There are different opinions on everything from politics to religion, lifestyle and everything in between. We can’t censor the data it’s given to support certain voices.”
Elgammal also raises concerns about the environmental impact of AI, noting that just “training the API these models use takes a lot of energy.”
Running on Graphics Processing Units (GPUs), known for their high energy consumption, this training can last for days or weeks, iterating over billions of images. This phase forms the bulk of the energy footprint, reflecting a significant demand on power resources. The generation of images, even after training, continues to require substantial energy. Running the models on GPUs to create images adds to the energy consumption, making the entire process from training to generation a power-hungry endeavor.
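To make that footprint concrete, here is the shape of the back-of-envelope math. Every figure below is hypothetical, chosen only to illustrate the calculation, and is not taken from Elgammal or the article:

```python
# Illustrative only: all inputs are hypothetical placeholders.

gpus = 1_000          # hypothetical training cluster size
watts_per_gpu = 400   # hypothetical draw per GPU under load
days = 14             # hypothetical length of the training run

kwh = gpus * (watts_per_gpu / 1000) * 24 * days  # kW x hours
print(f"Estimated training energy: {kwh:,.0f} kWh")  # 134,400 kWh
```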
Lost in Translation
As generative AI tools become increasingly more sophisticated, the potential for collaboration between humans and artificial intelligence increases exponentially. Elgammal explains how platforms designed for artists to train AI on their own data can lead to new forms of artistic expression, where the machine becomes an extension of the artist’s creativity.
But the newest text-to-image tools, trained on billions of publicly available images, actually decrease the level of creativity possible with generative AI, he argues. “With text prompting, as a way to generate, I think we are losing the fact that AI has been giving us ideas out of the box, or out of [the] ordinary, because now AI is constrained by our language.”
One of the most interesting things about generative AI, says Elgammal, is its ability to visually render our world in novel ways. But text-to-image tools compel AI to look at the world “from the lens of our own language,” he explains. “So we added a constraint that limit[s] the AI’s ability to be imaginative or be engineering interesting concepts visually.”
Language is still useful in other contexts, however, “because if you are using AI to generate something, linguistic text or something very structured like music, that’s very important to have language in the process,” he says. “So we have a long way to go in terms of how AI can fit the creative process for different artists. And what we see now is just still early stages of what is possible.”
AI as Creative Assistant
Artificial intelligence, it turns out, might be best employed as a creative assistant.
Elgammal likens generative AI to a “digital slave” that can be used to help artists increase their creative output. “Fortunately, the AI is not conscious. So having a digital slave in that sense is totally fine,” he says. “There’s nothing unethical about it.”
He compares the relationship between artist and AI to a director and film crew, or Andy Warhol’s Factory, which had dozens of assistants in varying capacities that allowed Warhol to carry out his creative vision. “But an emerging artist doesn’t have this ability. So you can use AI to really help you create things at scale.”
An article in the Harvard Business Review by David De Cremer, Nicola Morini Bianzino, and Ben Falk explores this idea further in one of three different — but not necessarily mutually exclusive — possible futures they foresee with the use of generative AI.
In this scenario, AI augments human creativity, facilitating faster innovation and enabling rapid iteration, but the human element remains essential.
“Today, most businesses recognize the importance of adopting AI to promote the efficiency and performance of its human workforce,” they write, citing applications such as health care, inventory and logistics, and customer service.
“With the arrival of generative AI, we’re seeing experiments with augmentation in more creative work,” they continue. “Not quite two years ago, GitHub introduced GitHub Copilot, an AI ‘pair programmer’ that aids the human writing code. More recently, designers, filmmakers, and advertising execs have started using image generators such as DALL-E 2. These tools don’t require users to be very tech savvy. In fact, most of these applications are so easy to use that even children with elementary-level verbal skills can use them to create content right now. Pretty much everyone can make use of them.”
The value proposition this represents is enormous. “The ability to quickly retrieve, contextualize, and easily interpret knowledge may be the most powerful business application of large-language models,” the authors note. “A natural language interface combined with a powerful AI algorithm will help humans in coming up more quickly with a larger number of ideas and solutions that they subsequently can experiment with to eventually reveal more and better creative output.”
The Future of Generative AI
De Cremer, Bianzino and Falk outline two other possible scenarios for the future of generative AI: one where machines monopolize creativity, and another where “human-made” commands a premium. Again, these aren’t mutually exclusive; any or all of them could occur — and be occurring — at the same time.
What the writers call a nascent version of this first scenario could already be in play, they caution. “For example, recent lawsuits against prominent generative AI platforms allege copyright infringement on a massive scale.”
Making the issue even more fraught is the gap between technological progress and current intellectual property laws. “It’s quite possible that governments will spend decades fighting over how to balance incentives for technical innovation while retaining incentives for authentic human creation — a route that would be a terrific loss for human creativity.
“In this scenario, generative AI significantly changes the incentive structure for creators, and raises risks for businesses and society. If cheaply made generative AI undercuts authentic human content, there’s a real risk that innovation will slow down over time as humans make less and less new art and content.”
The resulting backlash could put even more of a premium on “human-made,” they argue. “One plausible effect of being inundated with synthetic creative outputs is that people will begin to value authentic creativity more again and may be willing to pay a premium for it.”
Businesses that find success using generative AI tools “will be the ones that also harness human-centric capabilities such as creativity, curiosity, and compassion,” according to MIT Sloan senior lecturer Paul McDonagh-Smith.
The essential challenge, he said during a recent webinar hosted by MIT Sloan Executive Education, lies in determining how humans and machines can collaborate most effectively, so that machines’ capabilities enhance and multiply human abilities, rather than diminish or divide them.
It’s up to humans to add the “creativity quotient” to use technologies like generative AI to their full potential. For organizations, this means creating processes, practices, and policies that empower people to be creative to maximize the power of transformative technologies.
“Boosting your creativity quotient will optimize the use of large language models and generative AI,” he said. “It will also put all of us in a much better place in terms of how we interface with AI and technology in general.”
Founders Fund principal John Luttig sees three distinct patterns of behavior with regard to the rapid rise of generative AI: hope, cope, and mope.
“With a sudden phase change driven by very few people, many technologists fear that the new world we’re entering will leave them behind,” he writes on Substack. “This fear creates hallucinations including hope (the LLM market deployment phase will unfold in a way that benefits me), cope (my market position isn’t so bad), and mope (the game is already over, and I lost).”
These hallucinations, as the venture capitalist dubs them, “propel hyperbolic narratives around foundation model FUD [Fear, Uncertainty and Doubt], open source outperformance, incumbent invincibility, investor infatuation, and doomer declarations. It’s hard to know what to believe and who to trust.”
Luttig seeks to dispel these myths, contending that there’s still plenty of room for AI startups to flourish alongside players like Microsoft and Google. The people who want to slow AI down, he argues, are just the copers and mopers who fear that they’re on the wrong end of the equation.
Tell that to the writers and actors out on the picket lines.
Hear best practices for the “video hustle” and fun personal stories from professional content creator Juliana “Traveling Jules” Broste.
Discover how she marries story (content) with a customized kit.
Broste also shares her essential gear list for travel vloggers and other content creators who need to pack light.
“Telling your own story, that’s the best part,” content creator Juliana Broste says. “You are in control of what elements are included, what elements are not, and how you want them to feel.”
Broste is a true multihyphenate as a travel vlogger; she produces, writes, shoots, and edits all of her own content, and also serves many of these functions on freelance projects for brands like CNN Airport, Matador Network, USA Today and many others.
“You can do it all,” Broste says. But she cautions that you will still need to manage your own expectations, or you will get overwhelmed. “You will learn everything over time.” That applies to everything from the business side to lenses to travel.
TRAVEL CONTENT BEST PRACTICES
Speaking of travel, Broste shares her tips and steps for doing a remote trip as a vlogger (or one-woman band or content creator…).
Scout your location (the internet is your friend)
Research and plan ahead for your visit according to your goals
Be prepared with talking points or a script of facts
Pack what you need for your shoot
YOUR EQUIPMENT
Broste has worked with all sorts of different types of cameras. She shoots Canon, but has worked with Sony in the past.
Broste recommends certain camera attributes as necessary for vloggers:
Lightweight, compact body
Vari-angle monitor screen
Audio jack for a microphone
Widescreen camera lens
In the nice-to-have category, Broste says, are components like a full-frame sensor, the ability to swap out lenses, the ability to take still photographs, Wi-Fi capability and an intervalometer.
Aside from the camera, Broste offers a good rule of thumb: you need to be able to carry all your gear on your back for at least an hour without “feeling like you’re going to die.” And “when you travel light, you’re going to be happy.”
Your traveling light essentials: An external microphone (with a dead cat) and a lavalier microphone, a wide-angle lens, and a gimbal/tripod/handheld pole as support for your camera.
Jennifer Lame’s Pace and Process for Editing “Oppenheimer”
TL;DR
“Oppenheimer” editor Jennifer Lame, ACE, speaks with “The Rough Cut” host Matt Feury about working with writer-director Christopher Nolan.
Lame’s priority while editing “Oppenheimer” was constantly moving the story forward by “cutting all over the room” rather than “lingering on a shot because of its quality or composition.”
The “Tenet” and “Marriage Story” editor wasn’t fazed by all the talking in rooms; in fact, these are her favorite sequences to cut.
Lame has sympathy for Robert Downey Jr.’s character Strauss because she can see in him the moral failings of any of us.
For a film in which there are a lot of men talking in rooms, Oppenheimer moves at a propulsive pace. Writer-director Christopher Nolan set himself the challenge of creating a thriller out of a biopic about scientists and politicians, and achieves it with the skillful work of editor Jennifer Lame, ACE.
She edited Nolan’s previous film Tenet, as well as Marriage Story for Noah Baumbach, and says that her priority while editing Oppenheimer was constantly moving the story forward by “cutting all over the room” rather than “lingering on a shot because of its quality or composition.”
“When you’re dealing with a three-hour biopic based on a ginormous topic, and a ginormous book, pacing is a problem,” she tells Matt Feury in an episode of the Avid-sponsored podcast The Rough Cut.
“Honestly, a lot of people felt earlier drafts of the movie were too fast, which is hilarious, because it was obviously longer than three hours for quite a long time.
“[The challenge was] how do you make people feel like they’re not being rushed through something but also not make this a four-hour movie?”
Fortunately, Lame finds dialogue scenes among her favorite to craft. “I like scenes with awkward human interactions. That’s my special thing. There were so many amazing scenes for me in the movie that I could have spent like three weeks cutting.”
She picks out the scene in which Oppenheimer (Cillian Murphy) is being none-too-subtly interrogated by an army officer played by Casey Affleck as one of these. Another is the scene in which Oppenheimer meets President Truman (Gary Oldman) in the Oval Office, plus all the scenes in room 2022, where the postwar tribunal deciding Oppenheimer’s security clearance deliberates behind closed doors.
“Every scene in room 2022 I love. I also love every scene with Lewis Strauss [the US businessman and naval officer, played by Robert Downey Jr.] because Strauss is my favorite character,” she says. “He performs one way, but then reveals himself in a different light, so the question was how sympathetic do you want people to be about him?
“I found the different onion layers of his personality and his psychology to be incredibly fascinating. And I actually feel for him. I don’t see him as a straight villain as some people do. I have so much empathy for him. I see him as like the Willy Loman character [from Miller’s Death of a Salesman], who thinks that he’s good at playing this game but actually he’s so not good at it.”
Part of the reason why Oppenheimer feels like it barrels along is the time-hopping structure that is Nolan’s signature storytelling mode and thematic preoccupation.
“Chris spends a lot of time structuring the script before it’s shot. The intimidating thing about scripts like that is making that come to fruition because — since he spent a lot of time writing it and he knows that he shot it — he expects that it’ll be great.
“Oppenheimer” trailer, courtesy of Universal Pictures
“I also tend to work with writer-directors because it’s like their baby, but also weirdly, I feel like they are kind of okay with killing their babies to some degree,” having the film reborn, in other words, in the edit.
Also in the interview, Lame expresses her appreciation for the efficiency of a Christopher Nolan movie. Even with a massive budget and the freedom of an auteur, he sticks to deadlines.
“The whole thing is tight, he’s kind of obsessed with time,” Lame says. “What I also love about working with him is that dates never move. We hit our dates. He’s just so efficient on every level of the process, not just with shooting, but also all the way to finishing the movie. We have screenings every Friday, like it’s this adrenaline rush. It’s, like, hyper focusing in a way that I’ve never hyper focused on a job before, which is just really fun.”
Images from and behind the scenes of “Oppenheimer,” courtesy of Universal Pictures
One visual and sonic element that pervades the film is that of particles and waves, which we learn exist simultaneously as characteristics of matter at the quantum level.
“Those were written into scenes,” says Lame, “But there was a creative process of figuring out when and how to cut those in. And like I would say that montage was very, like free flowing. It’s very hard to talk about editing on some level, because you try things in terms of rhythm, and it’s like trial and error.”
It’s also why the editor does few interviews, because she finds talking about editing like “talking about playing the piano. You just practice a lot and you get better at it but sometimes it’s kind of boring to talk about.”
Next, Watch This:
Clip: “The Trinity Test,” courtesy of Universal Pictures
Clip: “Pushing the Button,” courtesy of Universal Pictures
To create what he considers a true cinematic experience, Christopher Nolan shot “Oppenheimer” in 65mm, 15-perf IMAX format.
September 14, 2023
Posted August 14, 2023
Strategies for Social Video Storytelling
TL;DR
Video Learning Lab host and founder of Sundance Media Group Douglas Spotted Eagle shares best practices for creating effective video for social media marketing.
Social video should be shot vertically, in HD, with two or more camera angles.
Audio quality is crucial, even for text-driven content. Try to fix it in post at your peril.
Most marketers are familiar with the seven-step funnel model of advertising, but how does it apply to video marketing? Or more specifically, how do you go about creating an effective video campaign for your brand? What specific steps are involved?
This tutorial from FMC offers a step-by-step guide to planning your video strategy.
Set Your Goals and Make a Plan
Questions to ask (this applies whether you’re working for yourself or for a client):
What are we trying to achieve? Are we selling a concept or a specific product?
What is your target audience or demographic?
What platforms will you use to deliver the content?
How will the content be viewed? Is audio an important component?
Spotted Eagle also urges viewers to consider why people buy products or accept new ideas. We don’t do it based on facts and figures. What does motivate us can be broken down into five basic categories.
The Five Buying Motivations
Ego (pride)
Greed (AKA your paycheck)
Fear (including FOMO)
Love of family (more quality time)
Comfort/survival (meeting our needs)
Note that none of these motivations are intended to be taken in the negative sense of the word.
Try to capture at least one motivation per message. Remember, people don’t buy features or specs; they buy benefits, outcomes that address problems or accomplish goals. You don’t buy skis; you buy the thrill of hurtling down a beautiful mountain.
Watch Your Game Tape
If you have an existing library of content, this is a good point to review the analytics on those videos. But don’t just look at the graphs and the metrics; sync up the analytics view with your video, and it should become clear when the video resonated and when any problem spots emerge.
What types of content got good engagement? Where did the audience exit early; can you pinpoint why? What themes resonated in the past, or what types of presentation caused audiences to skip ahead or rewind the content? Did you include enough refocus moments, or were they too infrequent in past videos?
Learn from what worked and make a plan to skip what didn’t (see the sketch below for one programmatic way to spot early-exit points).
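As one illustration (our sketch, not part of the FMC tutorial), if your platform lets you export per-second retention data, a few lines of Python can surface the steepest exit points to review against your cut. The CSV column names here are assumptions:

```python
# A minimal sketch: given per-second audience retention numbers, flag
# the moments where viewers bailed fastest. Column names are assumed.

import csv

def biggest_dropoffs(path: str, top_n: int = 3) -> list[tuple[int, float]]:
    """Return the top_n (second, retention_drop) pairs from a CSV
    with columns: second, retention (0.0-1.0)."""
    with open(path, newline="") as f:
        rows = [(int(r["second"]), float(r["retention"])) for r in csv.DictReader(f)]
    rows.sort()  # ensure chronological order
    drops = [(s2, r1 - r2) for (_, r1), (s2, r2) in zip(rows, rows[1:])]
    return sorted(drops, key=lambda d: d[1], reverse=True)[:top_n]

# Usage: print the three steepest exit points, then scrub to those
# timestamps in the video to see what pushed viewers away.
for second, drop in biggest_dropoffs("retention.csv"):
    print(f"At {second}s viewers dropped by {drop:.1%}")
```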
Gather Your Materials
Next, compile a list of assets, then aggregate them in a content library. Getting your logos, fonts, music, and other content elements into a centralized location at this stage will pay dividends down the road.
This stage prevents delays and last-minute changes.
For example, you don’t want to plan around a vertical logo only to discover that all you have handy is, say, a horizontal, B&W version, forcing you to pivot or, worse, pause.
Making the Content
You now know where you will distribute your video. Check the specs for each platform (hint: they’re not the same).
Before you press record, think through a production game plan, ensuring you get the shots you need to produce the right content for Insta or YouTube.
But how can you be sure you’ll optimize your video to be repurposed on multiple platforms?
Camera Settings
Here are some best practices (with a repurposing sketch after the list):
Record at 60 frames per second.
Consider shooting with both a standard production camera and cell phone or tablet, or even two different cameras.
Always shoot in HD or higher resolution (H.264 or H.265).
Most H.265 systems also offer a 10-bit option.
Go for a vertical capture, even if you are shooting in full 1920 by 1080.
Shoot at least two different angles OR crop the video in post to create different focal depths to keep the eye engaged.
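As an example of the repurposing step (our sketch, not from the tutorial), a 1920x1080 master can be center-cropped to a 9:16 vertical cut with ffmpeg, assuming ffmpeg is installed; the file names are placeholders:

```python
# A hedged sketch of one way to repurpose a horizontal 1920x1080 master
# as a 9:16 vertical cut. Requires ffmpeg on the system PATH.

import subprocess

def make_vertical(src: str, dst: str) -> None:
    """Center-crop a 1920x1080 clip to 608x1080 (9:16) for vertical platforms."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "crop=608:1080:656:0",  # width:height:x:y, centered horizontally
        "-c:v", "libx264",             # H.264, per the HD guidance above
        "-c:a", "copy",                # leave the audio stream untouched
        dst,
    ], check=True)

make_vertical("master_1080p.mp4", "vertical_cut.mp4")
```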
Audio Is King
Humans are auditory creatures. Make sure you have good audio for your video: bad sound drives abandonment rates higher than just about any other defect.
To accomplish that, it’s important to have a quality microphone setup. Spotted Eagle recommends the Audio-Technica System 10 lavalier or the AT2020. And it’s crucial that the speaker is close enough to the microphone to capture their voice and not just the room’s echoes.
It’s key to get it right and not plan to fix audio in post. After all, Spotted Eagle notes, “You cannot pull great audio out of poor audio; at best, you can make poor audio tolerable.”
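One way to honor that advice on set (our sketch, not Spotted Eagle’s) is to sanity-check levels before the shoot wraps. Assuming a 16-bit PCM WAV recording, Python’s standard library can report the peak level:

```python
# A minimal sketch: report a WAV take's peak level in dBFS, since
# clipped or buried audio can't be rescued in post. Assumes 16-bit PCM.

import math
import wave

def peak_dbfs(path: str) -> float:
    """Return the peak sample level in dBFS (0.0 = full scale, i.e., clipping)."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    samples = memoryview(raw).cast("h")  # view bytes as 16-bit signed samples
    peak = max(abs(s) for s in samples) / 32768
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

level = peak_dbfs("take_01.wav")  # placeholder file name
print(f"Peak: {level:.1f} dBFS", "(clipping!)" if level >= -0.1 else "")
```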
Let There Be (Enough) Light
Proper lighting can make the difference between a professional-grade video and a poor-quality social spot. It’s important not to rely on ambient or overhead lighting.
But you don’t necessarily need an expensive kit or your own gaffer on set. Spotted Eagle says he often defaults to an $80 LED light for smaller productions.
Beyond the Video
It’s important not to neglect the text and metadata that accompanies your video content. The descriptor or caption (depending on the platform) is prime real estate.
Even if your video is text-driven, this is a good place to put important information and to drive engagement by including phone numbers, websites, addresses, links to the product, or other elements that may (or may not) be related to your video’s call to action.
And in post, add your watermarks! Logos or bugs are important branding elements, whether you’re talking about a product or a company.
Real Examples
If you want to see these tips in action, or would like more clarity on what not to do, watch the video above or visit our Video Learning Lab to see Douglas Spotted Eagle walk through examples of what to do, plus a couple of “whiffs” that don’t resonate with viewers.
Paul Dugdale: Shooting Cinematic Live Performances
From “Elton John Live: Farewell From Dodger Stadium,” courtesy of Disney+
TL;DR
Music video and live concert director Paul Dugdale explains his approach to capturing live performances, including working with artists, when to use new technology (and when not to), collaborating on set, and much more.
Dugdale likes to get involved in discussions with production early on.
He also talks about how he captures a performer’s energy.
In conversation with AbelCine’s Jeff Lee, Dugdale details how he achieved a top-down shot of Elton John and a tracking shot of the Griffith Park Observatory.
It’s not surprising to learn that successful filmmakers, and editors especially, have an ear for music. It all helps with the tempo and rhythm, and, of course, the audience’s appreciation for any video rests significantly on the audio. For those working in music videos, a musical background is perhaps even more important.
Not only is Dugdale musical, but so is his close circle of collaborators, including a TD who is also a drummer and an editor who is “an amazing dancer.”
Clearly it helps everyone involved that they appreciate music. Obsessed by it, even.
Filming Elton John at Dodger Stadium gave the production team little room for maneuver, since the music promo team had to plug into a juggernaut of a global tour.
From “Elton John Live: Farewell From Dodger Stadium,” courtesy of Disney+
With Coldplay, on the other hand, Dugdale was involved even while the band was recording its album. “They had a concept for the record, I would speak to them and their manager, and they would just sort of talk me through the ambition for the sort of visual aesthetic that they wanted the whole project to embody and we created that project literally from scratch.”
A lot of Dugdale’s process boils down to finding ways to capture the energy that an artist has live and transfer it to a short video.
“There’s no real staple way that it happens but the intention is kind of the same, which is to embody that stage production, and try to maximize, emphasize all of the best parts of it and try and translate that to screen,” he explains.
“And also to try to capture the relationship between artists and audience in the room. A lot of bands will say that the band sort of exists in front of an audience and without anyone listening to the music, the band doesn’t exist.
From “Elton John Live: Farewell From Dodger Stadium,” courtesy of Disney+
“You have to have that bounce of energy. That’s where the magic happens… to just show what it feels like to be in front of that artist and listening to that music.”
Getting close to performers, even virtually, has become even more significant since COVID, when fans were shut out of attending live performances.
Dugdale used to work with English electronic dance music band The Prodigy, and every show he filmed with them was intense. “Super high energy, really loud music, everyone squished together, sweating. And, you know, it was [an] incredible environment.
“I just got to try and create something that at least lifts your heartbeat and makes you want to go out dancing. That’s the most important thing to me: to try and just make you feel something [when you are watching] at home.”
If the job is to capture the live show, then Dugdale prefers to prepare by watching the show and deciding on a camera plan. He trusts his DP to translate this vision into camera type and lenses.
Again, the closer he can get to the performers the more the live experience will shine through.
From “Elton John Live: Farewell From Dodger Stadium,” courtesy of Disney+
Dugdale goes into some detail about how he achieved a certain top-down shot of Elton John and a tracking shot of the Griffith Park Observatory.
“It’s tough filming Elton because he’s not the guy running up and down the stage. He’s sat behind a six-foot plank of wood [aka a grand piano],” he said.
“We had a bunch of different close ups and mid shots on him but each of them had to be so super precise to work. When Elton’s playing if he’s glancing down at the piano keys, you see the piano keys reflected in the glasses. Or you see his face perfectly reflected in the piano. All of those things have kind of been done before, but they’ve got to be right.”
Part of his job is to imagine ways of filming artists — in this case Elton John — in a way that hadn’t been done before.
“One of them was having a tracking camera that is really close to him, as though you were doing a music video or shooting something in the studio, where there is no boundary of where you can go.
Getting ready to plan your journey at NAB Show New York? You won’t want to miss this opportunity to explore the synergies between live broadcast and cinema with the Cine+Live Lab. (Use the code AMP05 for free registration.)
This destination features a variety of educational sessions and production demonstrations centered on the trend of translating cinematic techniques to live broadcast productions. All sessions and demos are open to all-access badge holders, but off-site bonus programs require advance registration.
Don’t-miss sessions include Color Accuracy: From On Set to Post, featuring colorist Warren Eagles in conversation with AbelCine Camera Technology Specialist Geoff Smith, and Hybrid Broadcast in the Real World, exploring use cases and projects involving a blended broadcast-cinema aesthetic and tools at top tech conferences in a conversation moderated by AbelCine director of rental Gabriel Mays.
AI Imaging Tools to Refine and Define (But Not Replace) Your Work
TL;DR
Fine art photographer Angela Andrieux provides a video tutorial on how to use AI tools to sharpen and up-res your still photos or to pull detail out of the shadows.
Don’t use AI programs as a crutch, she says. No AI is going to deliver a perfect photo if your basic shot isn’t most of the way there.
The cool thing about AI, she says, is how it does a much better job of preserving detail than a non-AI tool.
We all strive to get our photos perfect in camera, but things don’t always go as planned. Did you or your subject move slightly? Did you have to crank up the ISO or pull out shadow detail in post? Did you have the wrong lens and have to crop dramatically to frame your subject?
There’s an AI for that. Fine art photographer Angela Andrieux explains the benefits of using AI tools to finesse the perfect picture.
In Andrieux’s case the AI is invariably from Topaz Labs, a developer she also advocates for on her website.
In mid-2022, Topaz Labs shifted from offering a suite of individual photo editing applications to merging their most popular problem-solving tools into one app — Topaz Photo AI.
“I’ve been a big fan of Topaz Labs‘ software for more years than I can count,” she writes. “And while their apps have changed significantly over the years, they continue to be an integral part of my workflow.”
In the tutorial, Andrieux demonstrates how sharpening, noise reduction, and photo enlargement tools from the Topaz app can be used to process new photos and even those taken a few years back when the resolution in your camera would not have been so great.
“You’re not going to be able to fix something that’s completely blurred,” she warns. “But we can do a lot to improve it. Sometimes it’s not perfect, but it can get you close enough to have a usable image.”
Image stabilization used to be a huge issue in photography, particularly in low-light conditions where the aperture needed to be opened wider, for longer, to let enough light through. Absent a tripod, that was a problem, leaving motion blur on the image.
But sometimes in travel or street photography, for example, you don’t have time to find the correct setting let alone set up a tripod.
“It’s better to have an imperfect photo than miss the shot,” says Andrieux. “And luckily we’re at a point with technology, where we’re able to minimize most of those flaws that you might have when shooting conditions aren’t ideal. Because there’s a lot of times where we’re shooting in low light, we don’t have the ability to add in artificial light, there’s just a lot of things that can make a shot not technically perfect.”
Andrieux typically shoots with Canon cameras, converting the Raw file into TIFF format as a precursor to creative editing. Most of her images go through Adobe Camera Raw or Adobe Lightroom before she opens up an AI tool to post-process aspects of the image.
“The cool thing about AI is it does a much better job [than] a non-AI tool of preserving detail. In the past, and with lower quality noise reduction tools, you’d end up in a situation where you could clear out grain but you also lost a tremendous amount of detail. The AI tools that we have available to us now preserve that fine detail and get rid of the grain.”
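To see the trade-off she’s describing, here is a minimal sketch of the traditional, non-AI approach (our example, using the Pillow library, not a tool Andrieux endorses): a median filter clears grain but smears fine detail, and sharpening afterwards can’t bring that detail back:

```python
# Classic, non-AI noise reduction as a point of comparison. Requires
# Pillow (pip install Pillow); the file names are placeholders.

from PIL import Image, ImageFilter

img = Image.open("noisy_photo.tif")

# Median filtering removes grain, but larger windows also erase exactly
# the fine detail (hair, fur, texture) you want to keep.
denoised = img.filter(ImageFilter.MedianFilter(size=5))

# Sharpening afterwards can't restore detail the filter already discarded.
sharpened = denoised.filter(ImageFilter.UnsharpMask(radius=2, percent=120))
sharpened.save("denoised_photo.tif")
```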
Most of the AI processing will take some time to render. So be selective, she advises.
No AI is going to deliver a perfect photo if your basic shot isn’t most of the way there. It isn’t, for example, going to rearrange your composition — unless you dive into an AI image generator like Midjourney.
“It’s always better to have a clean file to start with as much as possible. Try not to use these programs as a crutch to just get lazy with your settings. AI can dramatically improve things, but there are always trade-offs. Some of those trade-offs with noise reduction, for instance, mean sometimes you can lose critical detail.
“This is going to be most evident on things where you want to see fine detail in hair or fur on animals or if you’re taking a picture of an object and you want to have that texture. You can lose some of those fine details using noise reduction.”
The AI palette can do things that might once have been the preserve of high-end grading systems. Andrieux describes how you can add diffusion, automatically enhance faces or eyes, whiten teeth, or remove the shine from skin.
“Every AI program will do the job a little bit differently. Some work better for different camera manufacturers. So just try them out and see which ones get the results that you like best.”
So what is it? It’s a space where the worlds of photo and video converge, where image-based, still photography fuses with motion capture, where you trade in existing for expansive, or simply find the inspiration to try something new.
Best of all, it’s a space to connect — not only with the end-to-end workflow for your craft, but with your community. Content creators. Photographers. Videographers. And so many others through photo walks, meetups, Q&A sessions, demos, exhibits, workshops and more! Learn more here.
Cine+Live Lab consists of educational sessions and demos focused on cinematic techniques for live productions.
All components are free to NAB Show New York badgeholders, but some events recommend an advance RSVP.
The HBO Camera Assessment Series screening will be held at the Warner Bros. Discovery offices, while the hands-on demo will take place at the Javits Center as part of this year’s event.
NAB Show New York attendees are encouraged to “explore the synergies between live broadcast and cinema” through Cine+Live Lab.
This destination features a variety of educational sessions and production demonstrations centered on the trend of translating cinematic techniques to live broadcast productions. All sessions and demos are open to all-access badge holders, but off-site bonus programs may require prior registration.
Cine+Live Lab highlights include:
At noon on Wednesday, Oct. 25, NAB Show New York will feature a session focused on the color pipeline. Color Accuracy: From On Set to Post will feature colorist Warren Eagles in conversation with AbelCine Camera Technology Specialist Geoff Smith.
The 1:30 p.m. session will explore Using Fiber Optics To Simplify Cinematic Multi-Cam Productions. Headlined by MultiDyne VP of Products & Western Sales Jesse Foster, the session will delve into how the industry is integrating cameras, including the Sony FR7 and ARRI Alexa 35, into live multi-cam workflows using fiber optic and power management systems.
At 2:45 p.m. on Wednesday, Hybrid Broadcast in the Real World will explore use cases and projects involving a blended broadcast-cinema aesthetic and tools at top tech conferences. AbelCine Director of Rental Gabriel Mays will moderate this conversation.
HBO Camera Assessment Screening & Discussion
Also on Wednesday, check out the latest HBO Camera Assessment Series. The project aims to determine the “most appropriate capture system for their productions” at HBO and Warner Bros. Discovery. This round of testing assessed the Sony Venice 2, the RED V-Raptor, the new ARRI Alexa 35 configuration, the Blackmagic Ursa 12K, and many more.
Register here for NAB Show New York and then submit your RSVP here to attend the free off-site event. Doors open at 4 p.m. for the 4:30 p.m. screening at the New York Warner Bros. Discovery offices, located at 30 Hudson Yards. Bring your badge (it’s your entry ticket).
NAB Show New York will then host a hands-on follow-up to the HBO CAS on Thursday, Oct. 26, at 11 a.m. featuring CAS leads Stephen Beres and Suny Behar. They will explain the testing methodology and dive into the technology advancements that changed the style and type of analysis required.
Live Content Capture
Another highlight will be live multi-camera capture of artist Ryan Bauer-Walsh at work. He will paint canvases on Wednesday and Thursday, and the work created during NAB Show New York will be sold, with proceeds benefiting the Ali Forney Center.
Additionally, post-discussion sessions and demos will feature a Sony Venice 2, ARRI Alexa 35, Panasonic AK-PLV 100 and a pair of Sony FR7 camera packages, along with gear from Mark Roberts Motion Control and Vinten, and supporting sponsors Fujinon, LUX, Multidyne and Seagate.
The Cine+Live Lab also includes Cinematic Lighting: The Cinematographer & Gaffer Relationship, scheduled for Thursday at 2:45 p.m. This conversation will feature a DP and a gaffer discussing the significance of effective lighting to cinematic productions.
Evan Shapiro Amplified: (EM)Power Your Career and SkillUP!
TL;DR
“Evan Shapiro Amplified” is a new video series featuring media universe cartographer Evan Shapiro as he guides you through the current Media & Entertainment landscape, starting with a focus on personal career development.
Shapiro provides a candid look at the current media ecosystem, highlighting the shift from Hollywood gatekeepers to big tech companies. This shift has significantly changed the industry’s dynamics, making adaptability and continuous learning more important than ever.
Shapiro emphasizes the importance of viewing your career as your own business. He encourages continuous learning, retraining, and recalculating your place in the industry, as well as prioritizing personal needs and values over job titles or roles.
Get ready for a wild ride with media universe cartographer Evan Shapiro! In an era where disruption is the new norm and change is constant, Media & Entertainment professionals at all stages of their careers are challenged to adapt and evolve. Shapiro offers a lifeline in the first installment of Evan Shapiro Amplified, an exciting journey through the M&E landscape that begins with a two-part exploration into the business of YOU and invites you to SkillUP! and unlock the full potential of your career.
The Current Media Landscape
In this first installment of the series, “SkillUP! (EM)Power Your Career – Part 1,” Shapiro provides a candid assessment of the current media landscape. He describes a seismic shift from an era dominated by Hollywood gatekeepers to one where big tech companies have seized a significant share of the industry’s value, attention and mindshare.
“We’ve passed the end of the past era, which was one that was driven primarily by the big Hollywood gatekeeping community,” he says. “And as streaming became prominent, not only did the traditional media players lose a decent amount of their power, and, frankly, underlying economics, big tech really moved in and took a tremendous amount of the value, but also attention and mindshare, out of the ecosystem for themselves as well.”
The rate of change, Shapiro says, is increasing exponentially. He warns professionals to stay alert and continually learn, or risk being overwhelmed by the pace of change.
“Change is no longer something you can schedule once a quarter, or once a year, it’s something that happens minute by minute, day by day. And if you don’t pay close attention to the rate of change, and the change around you, if you don’t wake up stupid every day and learn something new about something new every day, change becomes like that niece or nephew that you only see at Thanksgiving, suddenly way bigger than you ever thought it could be,” Shapiro warns.
Career Sustainability in the M&E Industry
In the face of this rapidly shifting landscape, Shapiro offers advice on how professionals can build sustainable careers. He emphasizes the importance of continuous learning, retraining, and recalculating one’s place in the ecosystem.
“So really, for professionals in this ecosystem, the table stakes are learning, retraining, recalculating your place in the ecosystem on a regular basis, almost treating every day as a new set of curricula, to retrain your skill set and your point of view around where everything’s going,” he advises.
He underscores the need for professionals to view their careers as their own businesses, cautioning that those who don’t risk being left behind.
“I call it being CEO of your own s**t,” he says. “If you don’t look at your career as a business, one that may have one client and employer at some times, but then at other times, may have multiple clients, if you don’t see your career as more than your job, then you are very much at risk of being left behind.”
Shapiro encourages professionals to take stock of their lives and careers, in order to identify and prioritize what truly matters to them.
“Investigating them, codifying these things, is part of taking stock,” he says, noting that he’s found that “very few people are ready to admit to themselves the three most important things in their lives.”
Shapiro further emphasizes the importance of prioritizing personal needs and values over job titles or roles. “The greatest title in the world is not really enough to counteract a terrible boss,” he underscores. “All the new challenges in the world aren’t really worth it if I have to commute four hours to work on a daily basis.” The pandemic especially, he says, has “taught us that life is too gosh darn short to not prioritize the things that are important to us.”
Stay tuned for Part 2 of “SkillUP! (EM)Power Your Career,” where Shapiro returns to explain how we are more than our bios, and just how powerful that idea can be.
Philip Grossman: Production in Impossible Places
Image courtesy of Philip Grossman
TL;DR
Cinematographer/photographer Philip Grossman details his experiences adventuring around the globe and teaches you what it takes to reach, explore and capture imagery at some of the world’s most “impossible” places.
Grossman has the distinction of being the first person to fly a multi-rotor drone in the Chernobyl Zone of Exclusion as part of his work in documenting the aftermath of the nuclear catastrophe.
He offers tips for shooting in remote locations, including gear, power management, and investing in satellite maps.
Flashlights and wet wipes are just a few of the essential items in the gear bag of award-winning photographer and cinematographer Philip Grossman, whose specialty is shooting in extreme and off-limits locations.
Grossman has been engaged in a long-term project in Chernobyl that led to his involvement in the award-winning HBO series Chernobyl. He also produced and hosted a one-hour episode of the Discovery Science Channel’s Mysteries of the Abandoned entitled “Chernobyl’s Deadly Secrets,” and is working on a documentary about the Soviet Space Shuttle program in Baikonur, Kazakhstan.
“One of the things you need to do if you’re going to get into adventure cinematography, and filming in unique places, is get accustomed to things not being like they are at home and just accept them,” he says. “It’s the journey. And so that’s what makes this all fun.” Learn more from his talk above.
Grossman is both a still photographer and documentary camera operator, usually employing a Canon for the former and RED cameras for video, but occasionally he will use the lightweight RED Komodo and combine the two disciplines.
“I found that by having a different body for doing stills versus motion, it causes your brain to switch gears faster. But at the end of the day you’ve got to figure out what you can carry and fit in your bag to achieve the goal of what you’re trying to capture. I always have my iPhone with me and a GoPro because they’re tiny.”
Grossman has the distinction of being the first person to fly a multi-rotor drone in the Chernobyl Zone of Exclusion as part of his work in documenting the aftermath of the nuclear catastrophe: “The smaller drones the better. Drone footage for me it’s like Tabasco, you just want to splash a little bit into your food. If you just coat your food and everything with it, it loses its pop.”
Managing power in remote locations is a particular skill. One tip (possibly illegal) involves swapping the stickers on camera batteries so that they look like lower-wattage units, in order to take several as carry-on luggage on an airplane.
Another tip is to turn the camera off rather than have it in “sleep” mode for power management. He also carries a USB-C charger for charging using the USB port in cars traveling between destinations.
“You just have to realize that eventually you’re going to run out of power,” he explains. “I have some friends who will go longer than a week and take solar with them but sometimes solar isn’t an option.”
There’s no substitute for preparation, however, when filming in places like off-the-radar NATO bases.
“Google Maps is your friend. Do not be afraid to call or contact the [US] embassy or the foreign embassy. It is government bureaucracy and will cause you to pull your hair out sometimes but they’re a great resource, and they’re really there to help you.”
(Grossman has found in foreign countries that most of them are excited to have Westerners visit, especially in the former Soviet Socialist Republics.)
Do a lot of research, read lots of books, and find obscure books.
“If you’re not familiar with Wikimapia, become familiar with it. Again, great resource. It’s like Wikipedia, but it’s crowd-sourced map information,” he advises. “So if you actually go to Chernobyl, to the city of Pripyat, just about every building has been documented there. Just like Wikipedia, Wikimapia is not 100% accurate, but does give you a good sense of what’s there.”
He also pays for real-time satellite images: “It’s definitely a value.”
Often in tight or blackout situations, Grossman has learned the importance of taking multiple flashlights, which he also supplements with inch-and-a-half glow sticks, “party sticks for raves and necklaces… I got a pack of 100 for about $9 on Amazon. I always keep 10 of those in my bag at all times now because you can crack those, throw them on the ground and they become breadcrumb trails for you.”
He even has thermal scopes for some work and a satellite phone, you know, just in case.
He’ll take tripods provided they are small, compact carbon-fiber models with quick-release heads that can fit into a small bag when collapsed.
Insurance is a must, but he always carries cameras with him, and will not check them into a baggage hold.
“A couple of things that I’ve found that helps. One, believe it or not, is I have a Department of Defense sticker that is on my Pelican case and ever since I put that on they have never opened it. I don’t use the TSA locks anymore. I literally just put a zip tie on it,” he says.
“But by all means check with your insurance company. There are different policies that will cover different things. Always have serial numbers for electronic items. If you have the original receipt, that’s great.”
When working with locals, or perhaps as a “gift” to officials, he recommends liquor: “Small little airplane bottles of bourbon is the greatest thing in the world.
“The other fun thing is breaking bread with the locals. We always make a point when we go. Fortunately for me, my filming partner is Polish and so his Russian is far better than mine but we always try to find locals prior to going or when we get there who can help us.”
The key here is respect, even when you have permissions to film: “Always be polite.”
Scott Robert Lim: How to Be a Hybrid Shooter
Image courtesy of Scott Robert Lim
TL;DR
With the increasing availability of high-quality 4K and 8K video, more and more photographers are turning to video to capture key moments.
Lim shares techniques and tools that can make the process easier, including how to optimize camera settings for video capture and how to extract high-quality stills from your video footage.
In short, don’t be afraid to embrace video in your photography workflow. By mastering the basics of video capture and learning how to understand your camera settings, you’ll be able to create surprising and impressive results that will set you apart from the competition.
Every pro and semi-pro content creator should learn how to shoot video and extract still photos if they want to make money in the future, according to expert Scott Robert Lim.
“Everybody should get into video, everybody should start to become a hybrid shooter,” he says. “There are a billion video views per day on Facebook and 86% of businesses use video as a marketing tool. So if you’re not learning how to do it, you’re losing out on money.”
Lim, a certified educator and winner of over 70 international awards, including Top Ten Most Influential Photographers, details how stills shooters can shift into video in a masterclass presented by Sony and B&H.
This intensive session provides an overview of how stills photographers can shift their skills into video and extract still frames so that they are performing one workflow but expanding the range of platforms and publications for their work.
“I wanted to show this concept of pulling frames because I believe that this is literally the future,” Lim explains. “I know it’s a little bit early for some people but in five years, this is going to be common practice, especially as technology gets better and better.”
Every still frame in Lim’s hour-long presentation is a still pulled from video. He says he first started experimenting with the concept during COVID, and because he realized that video comprises nearly 100% of all media online.
“It was totally amazing to me that I was able to get a frame and then in Photoshop do my magic [and create] a totally usable image from something that I just thought was a castaway video.”
He shows how he pulled a still from a video shot at 8K and increased the resolution in Photoshop (using its AI tools) to a 32-megapixel file that any leading magazine would be comfortable using in print.
Of course, it’s a little more complicated than that. While with stills you might worry about resolution and format (along with composition and lighting, of course), if you’re pulling frames from video there are multiple parameters to juggle.
Lim dives deep into the details of frame rate, shutter speed, bit rate and compression. He also discusses color bit depth and color sample rates.
“That’s the reason why video is difficult, because the quality changes according to all these little things,” he says. “Your workflow [will mean] you create the video and then extract stills from the video, so it’s much more like the workflow is killing two birds with one stone.”
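Lim’s own settings aren’t spelled out here (he reveals them in the video above), but one example of how those parameters interlock is the commonly taught 180-degree shutter rule, which ties shutter speed to frame rate and, by extension, to how much motion blur ends up in any still you pull. A quick sketch; the rule is standard practice, not a figure from the talk:

```python
# 180-degree shutter rule: shutter speed = 1 / (2 * frame rate).
# Faster frame rates force faster shutters, which freeze motion better
# for frame pulls but demand more light; one reason video exposure
# juggles more variables than a single still does.
def shutter_speed(fps: float) -> float:
    return 1.0 / (2.0 * fps)

for fps in (24, 30, 60, 120):
    print(f"{fps:>3} fps -> 1/{int(2 * fps)} s shutter")
```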
For instance, for social media the standard requirement is HD (2K), not higher resolutions. So for social media video you only have to shoot in full HD. Or you can shoot in 4K and crop the heck out of it, he says.
“The pixel dimensions of an Instagram reel are full HD. So if I were shooting a 4K video and let’s say I didn’t want to turn it portrait, I just wanted to keep it landscape, there’s so much resolution there I could just crop the middle and I would have plenty of resolution for the reel. Or I could flip it and shoot it in portrait and then I could crop more than half of it and still have enough resolution for an Instagram reel.”
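The arithmetic behind that is easy to verify. Here is a small sketch using the standard pixel dimensions (assumed here rather than quoted from the talk: 3840×2160 for UHD 4K, 1080×1920 for a portrait reel):

```python
# Assumed standard dimensions (not figures from Lim's talk).
FRAME_W, FRAME_H = 3840, 2160   # landscape UHD 4K frame
REEL_W, REEL_H = 1080, 1920     # portrait full-HD Instagram reel

# Largest 9:16 portrait crop that fits inside the landscape frame:
# use the full 2160px height, then take the matching 9:16 width.
crop_h = FRAME_H
crop_w = round(crop_h * REEL_W / REEL_H)   # 2160 * 9/16 = 1215

print(f"crop: {crop_w}x{crop_h}")          # 1215x2160
# 1215 >= 1080 and 2160 >= 1920, so the crop still downsamples to the
# reel size with resolution to spare, just as Lim describes.
print("enough for a reel:", crop_w >= REEL_W and crop_h >= REEL_H)
```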
The information can be overwhelming, but Lim recommends just hitting video record and seeing what comes out of it.
“Some of your greatest captures you’re going to find are not the ones that are the sharpest, or the ones with the highest pixel count or megapixel count. It’s the ones that you just feel when you look at them. So go out there, shoot, have fun with it.”
Watch the video at the top of the page, where Lim reveals what he calls his “magic hybrid settings” for doing both video and stills.
Announcing Our Photo+Video Lab at NAB Show New York, October 25-26
So what is it? It’s a space where the worlds of photo and video converge, where image-based, still photography fuses with motion capture, where you trade in existing for expansive, or simply find the inspiration to try something new.
Best of all, it’s a space to connect — not only with the end-to-end workflow for your craft, but with your community. Content creators. Photographers. Videographers. And so many others through photowalks, meetups, Q&A sessions, demos, exhibits, workshops and more!
Jon Radoff: Why Creativity Will Be Critical in an AI-Driven Society
TL;DR
Game pioneer Jon Radoff spoke at the MIT Media Lab about how far we’ve extended ourselves into the digital realm over the last two decades, and how we’re on the cusp of an era that will see a fundamental change to civilization itself.
In his talk, Radoff explains his vision for putting this creative power in the hands of everyone, and not siloing it off behind a handful of centralized gatekeepers.
AI and the search for creativity is about how things we often call “creative” are really searches for solutions, says Radoff, who details how emergent AI technologies such as autonomous agents will impact this.
We’re on the cusp of an era that will be a fundamental change to civilization itself, according to game pioneer and entrepreneur Jon Radoff.
It will be a battle for digital identity and self-expression.
“The next battleground on the internet will be between centralized AI services, and decentralized AI you could run on your own device,” he said during a presentation to the MIT Media Lab.
Radoff explained that to date our digital identities have been expressed through online game personas, social media and avatars. This has evolved toward expressing ourselves creatively such as through virtual worlds and platforms like Minecraft and Roblox.
The third era — which we are at the very beginning of, he thinks — is about empowerment through artificial intelligence, where autonomous AI agents carry out our will. But there are obvious dangers over loss of control that lie ahead.
Radoff began his presentation by asking us to view games as proto-metaverses.
“Games are abstractions of reality,” he says. “They have elements of storytelling, there’s some kind of shared imaginary space that has to take place to play a game.”
While most games are constrained by rules, some are not, and this potential is what excites Radoff. He identifies role playing game Dungeons & Dragons as the first to combine rules with the freedom of its players to create their own stories to expand the scope of the game.
“It is a shared imaginative space, shared place for creativity, a place you get to take on different roles. And there’s enormous emergent play. It’s emergent because you can’t fully predict the outcome.”
The metaverse takes this concept of emergence into digital form and allows participants “to cross time and spatial barriers,” Radoff says. “The metaverse gives us a place where we can go through similar imaginative experiences without having to necessarily meet in person.”
Online multi-player games like World of Warcraft and world-building platforms like Roblox and Minecraft are inherently social, Radoff argues. It is the social connections that players form there that give us clues as to how the metaverse will grow.
Just as the number of possible social interactions is essentially infinite, so the emergent nature of these virtual worlds is infinite, and infinitely unpredictable by design. That is something not possible with the closed HTML-based websites of the 2D internet, but the seeds of how we will navigate its 3D successor are already here.
And this has individual digital identities at its core.
“Most people are participating in online games, or in social media. Maybe you’re into eSports, maybe you’ve done online dating, maybe you’ve participated as a viewer, or even [participated in] live streaming. Maybe you’ve done some cryptocurrency. Maybe like me, you’re capturing your biometrics 24 hours a day and uploading it to the internet, where AI figures out how to tell me how to live a better life,” Radoff explains.
“The key idea is that our identities are now very much comprised not only of who we are physically, but who we are digitally. And that’s changing a lot of the trajectory of human civilization.”
If the current phase of our online ID is a presence on Twitter or Facebook, then the next step is our expression. It’s about what we put out online as digital beings.
Again, the first steps of this journey are already being taken in the form of digital twins. Radoff notes that just as we’re projecting our personas into digital space, we’re starting to take physical things with us into the digital space too.
“We’re going to have more and more digital twins of objects in the real world that you can scale up [online]. If you can do it in a factory, you could do an entire city. Why stop at a Smart City, when you can do a twin Earth?”
As we digitize more of the physical world, the virtual world will in turn impact the real world in what Radoff imagines will be a virtuous feedback loop.
“The idea of shaping worlds and exposing ourselves to them and allowing us to shape experiences that then affect us as well, such as creating an avatar online [and] wearing it in real life. We’re starting to blur the distinction of who we are with our digital personas online.”
But there’s yet another phase and it involves AI. Generative AI will amplify the whole virtual/real crossover and multiply the speed at which it is built. The question then becomes can we as individual humans retain agency over our own identities? Radoff thinks we can.
His solution lies in providing a framework for humans to retain agency — the power — to change and control AI. In this respect Radoff seems to agree with Web3 advocates who envision the next generation of the internet as the last chance for society to build a more equitable distribution of labor and reward.
“When I talk about projecting our will onto the online world through intelligent agents, I’m also thinking about our own agency about interpreting the online world. Right now, in the centralized version of the world [aka Web2], it’s really governed by algorithms whose objective function is revenue and EBITA for an organization. It’s totally fine. I’m a capitalist, so I get it. But I personally want to live in a world where it’s my online experience that is optimized around the objective function that I set,” he says.
“For instance, if my intelligent agent wants to let me know that it discovered a product or service that I ought to consider paying for, it’s because it’s looked at all the information available and pattern-matched that, based on my criteria, to what I want.”
While not necessarily convinced of current iterations of blockchain or Decentralized Autonomous Organizations (DAOs), he does think these are examples of ways to create new governance systems that work for everybody.
“We could debate the pros and cons of whether that makes sense in all or in particular cases but nevertheless, DAOs are a social system, an emergent social structure, which I think is interesting to look at. So it’s interesting to think about what happens with the agents that represent us online? And then how do we form governments around that?”
Such theorizing becomes urgent when you consider the amount of deepfake content circulating online with few guardrails in place for people to detect fiction.
An article in Wired by Thor Benson titled “This Disinformation Is Just for You” worries that generative AI won’t just flood the internet with more lies — it may also create convincing disinformation that’s targeted at groups or even individuals.
Hany Farid, a professor of computer science at the University of California, Berkeley, tells Benson this kind of customized disinformation is going to be “everywhere.” Though bad actors will probably target people by groups when waging a large-scale disinformation campaign, they could also use generative AI to target individuals.
“You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’ That’ll get automated. I think that’s probably coming,” Farid says.
In the lead-up to the 2024 US election, Facebook’s algorithm — itself a form of AI — will likely be recommending some AI-generated posts instead of only pushing content created entirely by human actors. We’ve reached the point where AI will be used to create disinformation that another AI then recommends to you.
“We’ve been pretty well tricked by very low-quality content,” Kate Starbird, associate professor in the Department of Human Centered Design & Engineering at the University of Washington tells Benson. “We are entering a period where we’re going to get higher-quality disinformation and propaganda. It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now.”
While recognizing the dangers inherent in unrestrained AI, Radoff is concerned that rampant regulation might stifle innovation.
“There are, of course, real safety issues that are of concern but I also don’t want to throw out the potential for all of society, including civilizational improvement effects, that will be a huge net benefit to us.”
He continues, “We already have a deepfake problem… We’ll first have to have defensive technologies. Knowing the authenticity of content, and who it came from, is going to be very important.
“The fear [about AI] is real and palpable, and it will drive politicians towards reacting to that potentially in a way that isn’t productive for innovation, and may even be a net worsening of safety. The rush to regulation worries me a lot.”
Whether You’re a Photographer or Cinematographer or Both, Here’s How to “Think Like a Filmmaker”
From “The Wrangler,” a short film shot by Jeff Berlin
The transition from still photographer to cinematographer should be simple, right? Cinematography is just 24 still images per second.
That’s what Jeff Berlin thought when he started the process of expanding from being a successful fashion and editorial photographer, shooting internationally recognized talent in beautiful locations throughout the world, to becoming a professional cinematographer.
But as he explains in this talk presented by B&H Photo, Berlin learned that the two jobs have quite a few differences, both in terms of the approach to the artistry and the specifics of the job description.
After presenting some of his work in fashion and celebrity portraiture, and a back story peppered with info about the six years he spent jetting off to plum assignments from his home bases in Paris and Milan, he talked about a number of the still photographers who provided him with references and inspiration in his work.
On the path to becoming any kind of visual artist, he says, “you develop and cultivate your sensibility, and you educate your eye.” In the world of stills, he developed a strong familiarity with such greats as Irving Penn, William Eggleston, Dorothea Lange and Richard Avedon.
While those artists’ work will likely always inform Berlin’s, as a cinematographer, he says, you’ve got to “find your references.” He speaks about classics of cinematography such as Days of Heaven from director Terrence Malick and shot by Nestor Almendros and The Danish Girl, directed by Tom Hooper with cinematography by Danny Cohen — “It’s just a really, really lovely film,” he enthuses — or Mike Nichols’ The Graduate, for which Robert Surtees served as cinematographer. These and a number of others, he says, “have become touchstones.”
Cinematographer Steven Bernstein (Monster, directed by Patty Jenkins), “has been my mentor through a lot of this journey,” Berlin says, noting that the DP was among the first to explain to him some of the differences between the two skills.
Berlin says he comes “from a world where you’re looking to shoot the most beautiful images,” but while that is sometimes what directors are looking for, it certainly isn’t always.
Now that he’s shot a number of different projects, he references a director’s treatment for a short film he shot. “‘I don’t like super sharp images. Ever.’ People talk so much about resolution and sharpness, but that isn’t necessarily what filmmakers want to tell their story.”
Berlin also touches on the different vocabulary in the two fields. “A tripod,” he says, “becomes ‘sticks.’” People don’t talk about the f-stop; they talk about the T-stop. There are cinematography-specific composition terms such as “French overs” and “dirty overs.”
Not that it’s terribly challenging learning the new argot, but it can be interesting going from being a highly in-demand still photographer to having to learn some basic terminology to shoot motion pictures.
Of course, cameras are an indispensable part of either profession, and Berlin goes into some depth about his findings as a cinematographer seeking “the best camera for the mission.”
The very top tier of motion picture cameras is perfect for situations that warrant something costing many tens of thousands of dollars and requiring a crew of a certain size just to move it around and ensure its smooth operation. But there are some cameras costing only a few thousand dollars that can be perfect for shoots requiring a smaller crew and a more modest footprint. Often, he points out, features and TV productions mix and match.
Berlin speaks in terms of the Sony gear he uses, but the underlying concepts can easily be transposed to equipment from other major manufacturers.
He speaks about the image quality, from sensor to encoding, of his Venice 2, which completely “kitted out” runs about 90 grand, and the FX series (3, 6 and 9), the cheapest of which costs about $4000.
Berlin also talks about dual base ISO, which many Sony cameras (as well as models from Panasonic and others) offer. It can allow for extreme low-light shooting and day-exterior cinematography without most of the quality sacrifices of the traditional approach to exposure index, which generally pumped the gain way up to enable really low-light shooting.
And he goes into the importance of ND (neutral density) filters as a way of controlling exposure in brightly lit conditions without having to change T-stop (and thus alter the depth of field) or ISO setting. (Sony cameras’ built-in ND filters do offer a convenience some competitors do not).
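As a worked example of that trade-off (the scenario and numbers below are illustrative, not Berlin’s): ND strength is quoted as optical density, where every 0.3 of density cuts one stop of light, so you can compute how much ND lets you hold a wide T-stop in bright sun:

```python
import math

def nd_stops(density: float) -> float:
    # A filter of optical density d transmits 10**-d of the light,
    # so stops of reduction = d / log10(2); ND 0.3 is about 1 stop.
    return density / math.log10(2)

# Illustrative scenario (not Berlin's numbers): the exterior meters at
# T11 but we want T2.8 for shallow depth of field. Light varies with
# 1/T^2, so the difference in stops is 2 * log2(11 / 2.8).
stops_to_cut = 2 * math.log2(11 / 2.8)
print(f"stops to cut: {stops_to_cut:.1f}")          # ~3.9, call it 4
print(f"ND 1.2 absorbs {nd_stops(1.2):.1f} stops")  # ~4.0
```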
In sum, he says, it’s comparatively easy to be a solo still photographer. “When you’re in a studio doing a campaign, you have your crew, you have your hair, your makeup, your stylist, your assistant…I would sometimes have three assistants, depending on the kind of job that I was doing.
“But filmmaking is always a team sport. The still photographer is the director, the DP and the gaffer. On a film set, everyone has their own role.”
Furthermore, “as a still photographer, you really want to have a style that identifies you, that individualizes you and gives … an identity to your work. A cinematographer really shouldn’t have a style; you are there to support the vision of the director.”
Futurists Agree “AI Won’t Be the Hollywood Version”
From director Paul Verhoeven’s “RoboCop,” courtesy of Warner Bros.
TL;DR
The speed at which AI is advancing has shocked most experts in the field but others think our fear is misplaced and that actually there’s a lot to be optimistic about.
By 2027 nearly a quarter of the workforce will be disrupted by AI in some form — but that means augmented or assisted by AI, not necessarily automated.
AI is not a single entity but is best thought of as a plurality of AIs, each with specific jobs; some of them will have consciousness.
The speed at which AI is advancing has shocked most experts in the field but others think our fear is misplaced and that actually there’s a lot to be optimistic about.
One of them is futurist Sinead Bovell, who contends the current challenges of AI are like the internet in the early ages of email.
“Since we don’t really know how things are going to transpire, how things are going to evolve, we’re tuning in a lot to Hollywood’s version of the future.
“Of course, some dystopian futures are possible but I don’t think that’s where we necessarily have to end up,” she said. “There are a lot of amazing people working on things like AI safety and alignment. So I think we have a good shot, if we can get our act together.”
Bovell was speaking on the “Futurists” episode of Bloomberg’s AI IRL podcast, where she predicted that nearly a quarter of the workforce will be disrupted by artificial intelligence over the next five years. But that doesn’t necessarily mean their jobs will be eliminated by automation — more like augmented by AI.
“For sure, certain tasks will get automated, but that’s different than an entire job,” she says. “It doesn’t matter what job you’re in, you have to figure out how to start using AI tools.
“Over the next 15 years, most of the jobs [impacted by AI] probably haven’t been invented yet — like a social media manager didn’t exist 15 years ago. And now, if a company doesn’t have one it’s toast.”
Proclaiming himself to be “very excited” and “incredibly optimistic” about the future of AI, Kevin Kelly — Senior Maverick at Wired — likens the transformative power of AI to electricity, the printing press, and even language.
“I’m optimistic because so far the benefits certainly outweigh whatever negatives and problems there are,” he told Bloomberg in the same episode. “I think that the problems are smaller, and fewer than we think [and] I think our capacity to solve the problems are greater than we think. So just as AI’s problems are new and powerful, our ability and will to solve them is also increasing.”
Nor does he think that the impact of AI on society will happen as fast as some fear.
“This is a centuries-long journey that we’re on. We’re gonna be having this conversation for the next century. So we have time to adjust and we’re already rapidly adjusting to these things as [new versions] come out within months. The versions are incorporating the objections that people have — whether it’s copyright or bias — and that’s one of the reasons it gives me optimism about our ability to control this as we go forward.”
Kelly points out that AI is not a monolithic entity.
“There are going to be many AIs, many varieties, many species [of AI]. We’re seeing that happening already. The kinds of AIs that might drive your car can be different from the ones that are doing the translation from one language to another in real time, which might be different than the ones that you’re using to make an image. We certainly can generalize some aspects of them but I think it’s very important to make sure that we talk in plurals.”
Some of these AIs are going to be conscious, he predicts, but this will be added in deliberately by humans for specific use cases.
“Some of them may have a little bit of consciousness [but] it’s not binary, it’s kind of a gradation with many varieties. Consciousness is not necessarily something we’re going to put into most AIs, because it’s a liability in most cases.”
How Steven Soderbergh Brings It All Together for “Full Circle”
Timothy Olyphant and Claire Danes in the Max limited series “Full Circle”
TL;DR
In a wide-ranging discussion covering AI, branching narratives, and shooting efficiently, director Steven Soderbergh looks back on the production of Max’s limited series, “Full Circle.”
The series used the new RDX System from Rosco for the virtual production of apartment interiors.
Neither Soderbergh nor collaborator Ed Solomon believe AI will ever match up to the creativity of the live human experience — but that doesn’t mean they won’t use AI as a tool.
Full Circle, a six-part series that just completed its run on Max, is a melodramatic crime drama series with interconnected storylines and hidden secrets, taking viewers on unexpected twists and turns.
Director Steven Soderbergh collaborated with writer Ed Solomon and together they talked about the project during an hour-long roundtable with a handful of trade outlets.
On Shooting Long Takes
One of the hallmarks of the show’s visual style is a tendency toward long takes that present the action at a distance without punching in excessively for close-ups. According to Soderbergh, as Jim Hemphill reported in IndieWire, those long, intricately choreographed takes have a practical component as well as a desirable emotional effect: They allow him to work faster.
“The thing that takes time when you have a lot of work to do in a day is unnecessary coverage,” the director said. “If you can rehearse and block and stage something and know where the cuts are coming before you’ve shot it and you don’t capture any redundant material and you’re not doing 20 or 30 takes of stuff, you can move pretty quickly.”
In the endeavor to shoot efficiently, much of the show’s interiors were shot on a volume stage. While Soderbergh initially hoped to shoot these scenes on location in an apartment near New York’s Washington Square Park, various factors led the production to opt for a sound stage instead, using the new RDX System from Rosco.
Phil Greenstreet, Rosco’s head of development for backdrops & imaging, went on the location scout around the apartments near Washington Square Park and shot hundreds of images with a Fuji GFX 100 camera. The apartment set was modified with long hallways for Soderbergh’s roving camera.
“They didn’t want to be messing with motion,” Greenstreet told Bill Desowitz at IndieWire. “They didn’t even want motion in the background, so the flags weren’t moving, the cars weren’t moving, you only see small slivers of cars in the distance anyway.”
Soderbergh explained to IndieWire, “I love what you get from [RDX] and the ability to go from one look to another in a matter of seconds. Literally, I can move the image around, I can adjust the contrast, I can adjust the brightness, I can blow things up, I can shrink them. There’s no other way to get this interactive, refractive light bouncing around the room off the surfaces with that kind of technology.”
Soderbergh and Solomon originally intended Full Circle to be a branching narrative like their 2018 HBO series Mosaic, which gave viewers the option to choose different outcomes for the story via the app.
“On Mosaic, we were able to do that, because that was repurposing the footage to use in both ways. I was using the same footage for the linear version that I was using for the app. That’s why that was not a problem,” Soderbergh explained during the roundtable, as quoted by The Hollywood Reporter’s Hilary Lewis. “My vision for the app version of Full Circle was completely different imagery, completely different approach directorially, different cameras, different everything.”
The Full Circle script was 400 pages, Soderbergh said, with the app version consisting of an additional 170 pages “in which there’s no overlap,” he said.
“I can shoot fast, but I cannot shoot that fast. We had to throw all of that away, [though] some of those 170 pages leaked their way back into the linear version.”
[Photo gallery: Zazie Beetz, CCH Pounder, Claire Danes, Dennis Quaid, Timothy Olyphant, David Wilson Barnes and Russell G. Jones in the Max series “Full Circle.” Cr: HBO]
The process made Soderbergh question whether there’s any real place for branching narratives in narrative storytelling.
“It’s not clear to me that this form of storytelling is needed or even wanted by audiences. In a primal sense, around the campfire or a dinner table, if somebody pulls the attention of the group to tell a story, the people in that group are expecting and wanting to hear a story that resolves itself. They don’t want to hear somebody tell a story at a dinner table in which they go one way, and then they back up and go, ‘Or it could go this way.’ That’s not what you want. I think there’s a very strong impulse for people to want to be told a story like, ‘You’re the storyteller. Tell me a story. Don’t make me do the work. That is your work.’
“That’s what I’m beginning to think. So it’s a real question whether or not I would return to that format without an idea that I feel can only be executed properly in that format.”
Asked for his thoughts on AI, Soderbergh said it could be helpful as a tool but he has doubts about AI’s ability to mimic the lived human experience.
“It doesn’t know what it means to have a flight cancelled and have to figure out how to get home,” he said, as quoted by Christina Radish at Collider. “At a certain point, that’s a real problem. You have to remember, its only input is data, text and images. It has no body temperature. It doesn’t know what it means to be tired.”
He added, “I think it’s useful for design creation… as a basic way to accumulate a framework. Let’s say it writes a script and it’s supposed to be a comedy script that ChatGPT has generated, and you say to it, ‘It needs to be funnier.’ And it says, ‘How?’ And you go, ‘I don’t know, it just needs to be funnier.’ What does it do? It’s just a tool. But if you asked it to design a creature that’s a combination of a cat and a Volkswagen Beetle, it can do that. That’s fun.”
This naturally segued into a discussion of AI’s implications for industry jobs. Solomon doubled down on his belief that art made by human beings cannot be replicated.
“The problem is, the people making decisions on the highest level are all about the bottom line and ‘How can I get rid of as many human beings as possible?’ [and they] don’t have the ability to judge what is good art and not good art. If we don’t draw a line in the sand now, my fear is we’re going to continue to a place where a lot of people are [going to be] out of work.”
But while viewers may have originally tuned in to see Claire Danes, Dennis Quaid, Timothy Olyphant, Zazie Beetz and Jim Gaffigan, those who stayed with the limited series saw a story about two Guyanese teenagers take center stage.
“You think it’s about this group of well-off white people being victimized. And then over the course of the show, the whole thing starts to tilt,” Soderbergh told the group of reporters, as quoted by THR. “By the end of it, we’re in a very different place than where we started. So it was this melodrama that had this very interesting subterranean thematic thread bubbling along that eventually comes up and takes primacy in the last two episodes.”
The series ends with the lead Guyanese characters walking around the unfinished Colony at Essequibo, the ill-fated development that connected them with Danes’ character’s family, and a pan over to a billboard advertising that the aborted project is “coming in 2003.”
“From the very, very beginning of the script, it was all engineered to that one last shot,” Soderbergh said.
Critical Reception
Whether moving from character to character or balancing suspense and action, Full Circle thrives on efficiency, writes Ben Travers in his review for IndieWire.
“Taken as a creative twist on a tried-and-true format, it balances the experimental and the satisfying in a way TV should strive for more often, especially in an era when filmmakers are being asked to create content. If you’re going to churn out stories for streaming, you may as well maintain your artistic credibility.”
“Sympathy for the Devil:” Nicolas Cage Makes Everything Better, Even LED Volumes
Video courtesy of Vū
TL;DR
Much of the new Nicolas Cage movie “Sympathy for the Devil” takes place inside a car, accounting for more than a third of the film’s total screen time.
Initial plans to shoot these car scenes on real streets were disrupted by persistent torrential rains at the Las Vegas location, prompting the production to shift towards virtual production inside the Vū virtual studio.
Director Yuval Adler and cinematographer Steve Holleran found that using Vū’s LED volume for these shots provided better and more authentic visuals faster than possible with traditional on-road or greenscreen filming.
The Vū studio allowed for genuine reflections on the car’s glass, enhancing the realism of the shots. The shift in filming methodology also led to a significant increase in efficiency, with Adler getting about 12 minutes of material in a day compared to potentially just one shot using traditional methods. Learn more in the video above.
A significant portion of the new Nicolas Cage movie, Sympathy for the Devil, unfolds inside a car as Cage’s terrifyingly histrionic character forces the driver (Joel Kinnaman) to drive him at gunpoint, committing horrible acts of violence along the way.
While writer/director Yuval Adler initially planned to shoot those scenes in a real car, along with the attendant headaches and delays of street closures, camera cars and rigs, his plans were undone by the prolonged torrential rains that plagued the Las Vegas location. So the production decided to pivot and shoot all the car shots, which comprise more than one third of the film’s 90-minute screen time, inside the Vū virtual studio.
Yuval and cinematographer Steve Holleran report that capturing these shots from inside an LED volume enabled them to get better-looking results much more quickly than they likely could have shooting out on real roads or on a greenscreen stage.
“When you have green screen or blue screen behind a car,” Holleran says, “it’s a lot harder to get authentic reflections in the glass. You put a car in the volume and put [a virtual] environment around it and reflections on top,” and you’re already closer to a shot that looks real.
[Photo gallery: Images from “Sympathy for the Devil,” starring Nicolas Cage. Cr: RLJE Films]
Yuval much preferred shooting in front of the LED wall “as opposed to being outside in a car, which is an absolute nightmare.” Comparing the two approaches, the director says a traditional car shoot can wind up with something like one shot in a day, while on Sympathy he was able to come away with about 12 minutes of material in the same timeframe.
In an interview for the Panavision website, Holleran outlined his creative approach further, noting that he got his start as a fine art photographer, inspired by the work of photographers like Henri Prestes and Christophe Jacrot, “who work heavily with surrealistic, hazy textures.”
“I wanted there to be something akin to a soft veil across the image, as if we were in a nightmare upside-down world. The other reference was Las Vegas’ Neon Museum, which is a great boneyard of old neon lights from the city. Walking the museum at night was a wonderful playground for color inspiration, say, the way primaries fade and turn strange with time. Then ultimately, it’s the themes in the film that have the final say.”
The DP also detailed his decision to shoot with Panaspeed optics sourced from Panavision Woodland Hills. “Often when choosing lenses, I start with two sets of parameters which don’t overtly align,” he said.
“On Sympathy, my first set of needs were lenses that were fast, lightweight, and had a range that leaned towards the wide side. This instantly cut out a large chunk of glass, much of it vintage, some modern. Then I wanted a specific creative look, for instance a set of glass that bloomed the highlights, had heavy halation, lifted blacks, with a cat-eye effect. Those prerequisites didn’t already exist together or were not readily available, so we turned to the modern Panaspeeds for their customizability, so we could ‘tune’ a look into a set of lenses that matched my technical specs.”
Sympathy for the Devil is “a surrealist pop-nightmare thriller set on the forgotten edges of Vegas,” Holleran says. “It’s got melancholy and rage fighting for space in front of the lens. We’re exploring the in-betweens of good and bad, truth and lies, past and present, right and wrong — both in the script and with the cinematography. It has upside-down, head-turning relativity at its heart, with who is good and what is true coming into question, and so the film’s look leans into asking these questions of the audience subjectively through composition, movement and color choice.”
Sympathy for the Devil, a film Yuval promises will offer “dark humor, violence and truth,” is open in theaters and available on streaming platforms. (Of course, those attributes are icing on the cake when you’re talking about a movie with Nicolas Cage doing full-on Nicolas Cage.)
LED cinematographer Erik “Wolfie” Wolford presents an in-depth demonstration of virtual production using LED walls at the Entertainment Technology Center’s vETC conference.
Wolford’s demo perfectly matches real and virtual lighting to create a realistic illusion of an actress standing on a sunlit beach, all controlled in real time through Unreal Engine.
His technical setup includes an HP Z6 G5 desktop workstation equipped with an Intel Xeon processor (Sapphire Rapids) and a pair of high-end NVIDIA RTX 6000 graphics cards running Unreal Engine.
Kino Flo’s new Mimik lighting, designed to create full-spectrum foreground lighting for virtual sets and overcome the limitations of LED wall light for skin reflections, automatically adapts to scene changes inside Unreal Engine.
One of the hottest topics of conversation in the filmmaking community of late has been the use of LED walls, or volumes, in production.
But few people have explained it in the level of detail as LED cinematographer Erik “Wolfie” Wolford, who this past June shared the real nuts and bolts of how he handles virtual production. His talk, “LED Stage Architecture: How It’s Built,” was presented during a session of the Entertainment Technology Center’s virtual conference, vETC.
Wolford, who’s shot music videos and documentaries, recounts his career path starting at the bottom rung on the production ladder on music videos for such visionaries as Spike Jonze and Michel Gondry. He eventually moved into lighting for special effects, particularly for green screen work, on features, commercials and music videos before focusing his creative energies on virtual production in his current role of LED cinematographer.
His first job involving Unreal Engine was the 2022 animated short Mr. Spam Gets a New Hat from director William Joyce and international VFX company DNEG. The 2D animators working on the project, he recalls, had to adapt to the challenges of 3D in the virtual world. “I came in and moved lights around, changed the size of the lights to make them wrap better, added a lot of shadows, and soon I was hooked!”
As Unreal Engine started to be used more for virtual production or ICVFX (in-camera visual effects), he was all over it.
To illustrate his talk, Wolford created a setup with an actress positioned in front of a digital wall. The real lighting on the actress and surrounding stage was matched with a virtual set of a beach, with the game engine controlling the interaction of foreground and background in real time. So convincing was the setup that the actress appeared to be standing on a sunlit beach, complete with a cliff and the ocean in the background. The illusion’s success lay in the seamless lighting — every element, real and virtual, seemed to be lit from the same source.
He breaks down the setup for the attendees. Unreal Engine is running on an HP Z6 G5 desktop workstation with an Intel Xeon processor (Sapphire Rapids) and a pair of high-end NVIDIA RTX 6000 graphics cards. One machine runs Unreal Engine, doing all the 3D movement based on the position of the camera, “just like first-person shooter games,” he explains, adding that changes in the positioning, depth of field and focal length of his real camera are immediately translated to the 3D background.
Then there is another box, which the first one feeds its signal into; that box sends the signal out through an ATEM switcher to yet another box, which serves as the editor node. This third box feeds the signal into a Brompton Technology processor, which takes the background image, breaks it into many squares and delivers each square to an individual one-by-one panel.
The signal then goes back to the record decks and feeds the Megapixel VR Helios LED Processing Platform, running special new lights that Kino Flo lent for this demo, called Mimik, designed specifically to let users create full-spectrum foreground lighting for virtual sets. The Mimik lights let the user apply the same technology the LED wall uses purely to create interactive lighting on the set.
“When we work with LED walls,” he says, “they look great to the eye. They look great to the camera. But they don’t create light that’s great for skin reflections.” To create the illusion that the set wraps around, you want foreground light in the same color space as the set, and the Mimik light provides a full CRI (color rendering index) image — essential when using LED panels as a light source. “The reds look really red; it looks pretty good on skin. I can simulate more walls giving me the color I want, and it helps tie the actor into the magic of the scene.”
Wolford also likes to use lights from Aputure, which have an excellent CRI, are efficient and portable, and are easily managed with a smartphone. But, he says, “it takes me about 10 minutes to tune in a color. Then, if I change the scene from daytime to nighttime, I have to go change all my lighting.” Instead, he explains, “the Kino Flo Mimiks let me put the Unreal scene right into the Mimik light. Then the Mimik light just automatically updates as I change [the characteristics of the] scene. If I go from day mode into night mode, it will go into night mode. If the scene [on the LED wall] is a sunset, it will automatically generate sunset light.
“These Mimik lights,” he says, “are half LED wall and half actual proper film set light.”
He elaborates on the specific scene he’s set up for the demo: “We’re doing a little campfire beach scene, so we have a virtual light on the screen behind doing a virtual flicker of fire on the wall.” Then he places a physical Mimik light on the actress, which helps sell the illusion that this is all a real scene on a beach.
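The scene-driven lighting he describes can be pictured as a simple sampling loop: grab the rendered background, average the region nearest the performer, and push the result to the fixture. The sketch below is a minimal illustration under those assumptions, not Kino Flo's or Epic's actual pipeline; `send_to_fixture`, the frame values and the region are all invented, and a real Mimik head receives a pixel-mapped video feed rather than one averaged color.

```python
import numpy as np

def frame_to_light_levels(frame: np.ndarray, region: tuple) -> tuple:
    """Average a region of the rendered scene and convert to 8-bit RGB levels."""
    y0, y1, x0, x1 = region
    mean_rgb = frame[y0:y1, x0:x1].mean(axis=(0, 1))
    encoded = np.clip(mean_rgb, 0.0, 1.0) ** (1 / 2.2)  # simple gamma encode
    return tuple(int(round(c * 255)) for c in encoded)

def send_to_fixture(levels):
    # Hypothetical transport stand-in; a real rig would speak sACN/DMX or
    # receive the wall's own pixel-mapped feed.
    print(f"fixture RGB levels: {levels}")

# Simulate one frame of the campfire flicker: a warm, slightly noisy patch.
rng = np.random.default_rng(0)
fake_frame = np.zeros((1080, 1920, 3))
fake_frame[:, :, 0] = 0.8 + 0.1 * rng.random((1080, 1920))  # red-heavy fire
fake_frame[:, :, 1] = 0.3
send_to_fixture(frame_to_light_levels(fake_frame, (400, 700, 800, 1200)))
```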
He demonstrates the real camera and the virtual camera running off Unreal that captures the motion of the real camera as it trucks left or right or dollies closer to the subject. The scene on the LED wall compensates with all the appropriate parallax applied to the scene, again, as he points out, “just the way a first-person shooter game works.”
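The parallax he describes falls out of simple projective geometry: the engine draws each virtual object where the line from the tracked camera to that object pierces the wall plane. A minimal one-dimensional sketch of that idea, with made-up distances:

```python
def wall_image_x(cam_x: float, wall_dist: float, obj_x: float, obj_dist: float) -> float:
    """1D slice of the parallax math: where a virtual object lands on the wall.

    By similar triangles, the object is drawn where the camera-to-object line
    crosses the wall plane, so objects near the wall's depth stay put while
    distant ones slide to keep their direction from the lens nearly constant.
    """
    return cam_x + (obj_x - cam_x) * wall_dist / obj_dist

# Truck the camera half a meter left (all distances in meters, illustrative):
for cam_x in (0.0, -0.5):
    near = wall_image_x(cam_x, wall_dist=4.0, obj_x=2.0, obj_dist=4.5)
    far = wall_image_x(cam_x, wall_dist=4.0, obj_x=2.0, obj_dist=40.0)
    print(f"cam at {cam_x:+.1f}: near rock at {near:.2f}, distant cliff at {far:.2f}")
```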
Digging deeper into the technology, Wolford notes that the virtual camera’s positioning is assisted by a Mo-Sys box using its IR reader, which “observes” little stickers on the real camera by sending out an IR beam and using the IR reflections to determine the relationship between the box and the real camera, translating that into positioning data for the virtual scene.
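Recovering a camera pose from known markers seen by a sensor is a classic perspective-n-point problem. Mo-Sys's actual solver is proprietary; the sketch below shows the general technique using OpenCV, with the marker layout, detections and sensor intrinsics all invented for illustration.

```python
import cv2
import numpy as np

# Illustrative 3D layout of the reflective stickers on the film camera body
# (meters, in the rig's own frame) -- not Mo-Sys's actual data.
marker_points = np.array([[0.00, 0.00, 0.00], [0.10, 0.00, 0.00],
                          [0.00, 0.08, 0.00], [0.10, 0.08, 0.02]])

# Made-up 2D detections of those stickers in the IR sensor's image (pixels).
detections = np.array([[312.0, 240.5], [402.3, 238.9],
                       [310.8, 168.2], [405.1, 150.4]])

# Assumed intrinsics of the IR sensor (focal length / principal point, pixels).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

# Perspective-n-point solve: recovers the rotation and translation of the
# sticker constellation (and hence the film camera) relative to the sensor.
ok, rvec, tvec = cv2.solvePnP(marker_points, detections, K, None)
if ok:
    print("rotation (Rodrigues):", rvec.ravel(), "translation (m):", tvec.ravel())
```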
Capturing lens data, such as zoom focal length and aperture, is done either with a similar device on the camera’s lens, a gear apparatus that tracks the position of the lens barrel and sends that data into Unreal Engine, or, in the case of some newer lenses, by reading lens metadata captured directly by the lens mechanism itself. Either way, zooming in or out, or opening or closing the iris, is translated into data, fed into Unreal Engine, and the image on the LED wall is adjusted accordingly.
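In code terms, the gear-encoder workflow amounts to calibrating raw encoder counts against marked barrel positions and streaming the decoded values to the engine every frame. The sketch below is a hypothetical illustration (the calibration table and names are invented); in production, Unreal's Live Link lens tooling plays the receiving role.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LensSample:
    focal_length_mm: float   # zoom position
    t_stop: float            # iris position
    focus_distance_m: float  # focus position

# Hypothetical calibration table: raw gear-encoder counts at marked barrel
# positions, mapped to focal lengths (values invented for illustration).
ZOOM_COUNTS = np.array([0, 1024, 2048, 3072, 4095])
ZOOM_MM = np.array([24.0, 35.0, 50.0, 75.0, 105.0])

def decode_zoom(counts: int) -> float:
    """Interpolate an encoder reading into a focal length in millimeters."""
    return float(np.interp(counts, ZOOM_COUNTS, ZOOM_MM))

# Each frame, a sample like this would be streamed to the engine so the
# virtual camera's FOV and depth of field track the physical lens.
print(LensSample(decode_zoom(2600), t_stop=2.8, focus_distance_m=3.2))
```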
At this point he explains an important word in his field — the frustum, which refers to the area of the LED wall that the real camera is seeing at any given moment. “It’s really computationally heavy to generate all the data for the entire wall,” he elaborates. “Any way we can save on computational power, we do, and one important way is by only generating data on the screen when and where we need to see it.”
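The footprint of that camera frustum on the wall can be estimated with basic trigonometry. The sketch below assumes a flat wall squarely facing the camera, a simplification of what Unreal's nDisplay does for curved volumes; all the numbers are illustrative.

```python
import numpy as np

def frustum_footprint(cam_x, cam_y, cam_to_wall_m, h_fov_deg, v_fov_deg):
    """Bounding box on the wall that the lens actually sees (the inner frustum).

    With a flat wall facing the camera, the footprint is just two tangent
    calculations; only this rectangle needs the full-quality tracked render,
    while the rest of the wall can get a cheap static fill.
    """
    half_w = cam_to_wall_m * np.tan(np.radians(h_fov_deg) / 2)
    half_h = cam_to_wall_m * np.tan(np.radians(v_fov_deg) / 2)
    return (cam_x - half_w, cam_x + half_w, cam_y - half_h, cam_y + half_h)

# A 40-degree lens 4 m from the wall needs only ~2.9 m of wall width rendered
# at full quality.
print(frustum_footprint(0.0, 1.5, 4.0, 40.0, 22.5))
```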
Explaining his methods for lighting performers, sets and props in front of the LED wall, Wolford says he relies on many techniques he learned lighting foreground elements on greenscreen stages. Interactive lighting that sells the effect, making the live-action elements seem to exist in a common space with the background, is an equally vital aspect of both types of VFX cinematography.
“If I’m lighting a greenscreen [stage] for a J. J. Abrams movie,” he says, “they’ll send me a picture of the effects background” — a style guide, he says — “and I’ll be like, ‘OK they’ve got sun coming from the left, there’s a big fireball that’s going to happen on the right and it’s more a reddish fireball than an orange one… so I’ll put a key light from the left and a reddish effect light on the right to simulate the fireball.
“Now I just look at the LED wall,” he says, “and I have my style guide right there.”
At the conclusion of his presentation, Wolford took a series of questions. His answers revealed additional tidbits, including the fact that the wall the audience was watching used a 1.9 mm pixel pitch (the distance from the center of one pixel to the center of an adjacent pixel; the smaller the number, the higher the resolution of the image). To contextualize, he explains, “You go to Coachella and see a big video wall. That’s going to be a 3.9 pitch. This is 1.9 pitch, so these pixels are very tightly packed together.”
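Pixel pitch translates directly into wall resolution. A quick back-of-the-envelope comparison, with the 10 m × 5 m wall size assumed purely for illustration:

```python
# Pixel pitch sets how many pixels fit in a given wall: divide each wall
# dimension (in mm) by the pitch.
for pitch_mm in (1.9, 3.9):
    cols = int(10_000 / pitch_mm)  # 10 m wall width in mm divided by pitch
    rows = int(5_000 / pitch_mm)   # 5 m wall height
    print(f"{pitch_mm} mm pitch -> {cols} x {rows} pixels")
```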
Asked about Unreal Engine’s primary competitor, Unity, he said that Unity’s graphics engines have dominated mobile gaming, and that the company made a significant move into high-end VFX when it purchased the highly respected New Zealand-based VFX company Weta Digital (the lead VFX house for the Lord of the Rings films). But, he stresses, Unreal Engine and parent company Epic Games “have been very aggressive in the video wall space, through training and teaching,” so while he’s aware that some LED walls based on Unity technology exist, he says they are rare and he’s never encountered one.
It Was Just a Beautiful Dream: Virtual Production for “Live Again”
From the Chemical Brothers’ “Live Again” video, directed by Dom&Nic.
TL;DR
With the music video for the new Chemical Brothers single “Live Again,” Outsider’s Dom&Nic and Untold Studios used virtual production and real-time VFX to create a surreal “Groundhog Day”-esque adventure story.
Outsider and Untold teamed with virtual production specialists from ARRI Solutions, Creative Technology and Lux Machina, with the shoot hosted on and powered by ARRI Stage London.
Filming lengthy takes against CG backgrounds that change in real time without the need to cut camera, the promo breaks fresh ground.
A new music video broke fresh ground by filming lengthy takes against CG backgrounds that change in real time without the need to cut camera.
“Live Again” is the tenth collaboration between British dance band Chemical Brothers and director duo Dom&Nic.
“It’s a trippy Groundhog Day-esque adventure story through multiple environments in a continuous dance performance by Josipa Kukor,” describes Promonews.
To achieve it they filmed long unbroken shots with background virtual environments switched live without edits.
“The woozy, wonky analog sounds and the dream-like lyric suggested a hallucinogenic visual journey following a character caught in a loop of death and rebirth. The hero in the film wakes or is reborn in different environments, ranging from deserts to nighttime neon city streets, and from cave raves to Martian worlds,” the directors told Promonews.
“This is an idea that could not really have been achieved with traditional filmmaking techniques. We created virtual CGI worlds and used long unbroken camera takes, without edits, moving between those different worlds seamlessly with our hero character.”
Dom&Nic’s production company Outsider brought together cinematographer Stephen Keith-Roach, production designer Chris Oddy and VFX facility Untold Studios, along with virtual production specialists from ARRI Solutions, Creative Technology and Lux Machina, all hosted on the ARRI Stage London.
Dom&Nic say that the band encouraged them to capture the feel of the track in the cinematic texture and look of the film.
“We were given the challenge to give it the visual equivalent of putting a clean sound through a broken guitar pedal to transform and degrade it into something unique. We love the way the film has an analog and messed up film look to it, it really adds to the visual trippy experience.”
Untold Studios real-time supervisor Simon Legrand added, “After designing seven bespoke virtual worlds in pre-production, we were then able to tweak elements on set, on the fly, giving the directors the freedom to play and experiment. This is the first time that virtual environments have been switched live on set in this way.”
Will Case, director of innovation at Creative Technology, confirmed, “It really pushed the boundaries of working in real-time workflows and technologies to bring to life Dom&Nic’s visually stunning promo.”
In an interview for the ARRI website, Dom&Nic described how they approached the project knowing that virtual production would be part of the mix. “Being immersed by ARRI Stage London and its walls of screens for the first time was very impressive,” they recalled. “You start wondering how to use the space and the technology to create a narrative that couldn’t be shot in any other way.”
The new technology helped inspire some of their ideas, but it also clearly demonstrated constraints demanding creative solutions, which ultimately led to the promo’s unique style. “Building a set that would transition for all environments was a creative challenge, but one that developed the idea further. For example, a desert floor could become a construction site in a city — once we had worked through that process, things started to tie together,” they said.
“Connecting the Unreal environments with the actor and set wherever possible while using whatever tricks and ideas we could imagine was important,” the duo added. “For example, CGI tumbleweed enters the frame, rolls around the physical set and then off into the 3D environment background. A black, disc-like object in the sky does the opposite: starting in the distance, in the Unreal environment, then right over the head of the actor in the set. Lighting was synced with the camera, and the black disc was also integrated as real-time VFX on the LED wall. This meant our actor could perform and react to the final image, which looked as ominous and convincing on the Stage as it does in the final film.”
Image courtesy of Will Case at Creative Technology
The primary challenge, however, “was taking an unbroken shot through different environments without cutting or using greenscreen. Pre-shoot, we used Unreal Engine to design a range of immersive environments, so we could work out the space for our physical set and get a sense of how our actor and props could be positioned.”
The directorial duo anticipated that it might be difficult to develop “a clear working translation between a traditional camera crew approach and the virtual production elements,” but that wasn’t actually the case. “Our DP, Stephen Keith-Roach, worked with the Stage teams to light the scenes virtually, and with gaffer Kevin McMorrow to use practical and studio lights inside the caravan, which worked seamlessly.”
Given that LED panels have a softer light than natural daylight, the duo employed softer lighting setups that worked well on the physical stage and also helped the blend from set to screen, they said. “It was a very quick process to move the sun across the sky or pop it behind a cloud.”
The Stage’s wraparound design with real-time camera tracking and lighting represents “a great leap forward from traditional greenscreen,” Dom&Nic enthused, with “no edges or spill, and perfect reflections.”
The process for seamlessly integrating CGI and real-world image-based lighting for the promo’s actors, highly reflective set, props, and wardrobe represented the biggest advance, the duo said. “The fact that foregrounds and backgrounds are shot in-camera, with no need to composite later is the icing on the cake!”
AI Is Everywhere: Where Did It Come From, Where Is It Going?
TL;DR
Video Learning Lab host and technology advocate Gary Adcock discusses the evolution and context of AI, stressing its pace and public awareness.
AI, as defined by Adcock, is a branch of computer science teaching machines to mimic human intelligence.
Adcock notes AI’s prevalence in everyday technology, from social media filters to Tesla’s Autopilot system, emphasizing the role of user data in training these systems.
Noteworthy AI applications include stock-picking on investment platforms, enhancing security systems, and the potential for breakthroughs in medical diagnostics.
Adcock presents Coca-Cola’s AI-generated “Masterpiece” commercial as a prime example of AI’s capabilities in creative endeavors.
Technology advocate Gary Adcock hosts a session in our new Video Learning Lab about the evolution of AI, with the terms he prefers: augmented intelligence and machine learning.
This video offers context and historical perspective on a topic that’s currently resounding throughout the world. It is being met simultaneously with exultation about its possibilities and existential dread about its potential to perform tasks beyond human capability, as well as tasks humans can perform but for which we may suddenly become unnecessary.
Services such as ChatGPT and Midjourney have evolved significantly since most people first noticed their existence earlier this year. In fact, while the technology seems to be moving at light speed, Adcock observes that the phenomenon that has moved fastest of all is awareness of the technology.
So… what’s AI? Adcock summarizes the overarching concept as “a wide-ranging branch of computer science that allows you to teach, to use the developmental process [similar to what] we would use to teach children [but] to teach machines. And it sometimes appears like it’s ‘intelligent,’ but it’s not.”
While ChatGPT and Midjourney seemed to magically appear, Adcock emphasizes, everyone has had an ongoing relationship with AI at least since they started trading their likeness or data to do something with an app on their device.
“Look at the media filters,” he says. “I’m going to put sunglasses on, I’m going to have a cat face, I’m going to have bunny ears, I’m going to do makeup on my phone. All these social media filters are smart filters built on AI. They’ve been built on a system that says you want to thin out your face? To elongate your chin? To change your hairline?”
All of those things can be adjusted very specifically and a lot of them look quite real. And that’s because we’ve all been teaching the systems how to do what they do. Likewise, Siri and Alexa. We are providing massive amounts of data for these systems to learn from.
Tesla’s Autopilot system is also artificial intelligence, he points out, with a massive amount of data processing happening whenever someone uses their Tesla. “I don’t think people are considering that. A Tesla has as many as 60 cameras in it shooting 4K/60fps material that’s being processed on NVIDIA cards underneath the battery,” primarily to provide the enormous amounts of data necessary to advance the brand’s self-driving capabilities.
Other recent developments can be seen on the TD Ameritrade and Robinhood investment platforms, which have run experiments that appear to have had success using AI to pick stocks. AI is also being used to prevent corporate hacking, for military purposes and to enhance security systems.
But perhaps the biggest positive developments Adcock spoke about involved the medical field: “A radiologist may have looked at 40 or 50 tests that day. All the knowledge they’ve gained from the experience of seeing these tests might be based on looking at a few thousand of these throughout their career.
“Now, think about [a system] that looks at millions of images of screenings.” The ability to aggregate, compare and analyze such a massive amount of data requires this type of technology, and many in the field expect it to detect more anomalies far earlier than the most skilled and experienced physician.
Tying all this back to AI for image creation, Adcock screens Coca-Cola’s “Masterpiece” commercial, generated using a combination of ChatGPT, Midjourney and other similar tools. Set in a crowded art gallery, it is full of unusual effects, including characters from famous paintings interacting with the product.
The kinetic, elaborate spot, he says, obviously took a lot of time and money to develop, but he holds it up as a very strong example of what the technology is capable of, and suggests what it might be capable of in short order.
“Oppenheimer” and Technology’s Ethical Consequences
From “Oppenheimer,” courtesy of Universal Pictures
TL;DR
“Oppenheimer” offers lessons on the “unintended consequences” of technology, says director Christopher Nolan.
Emerging tech including quantum computing, robotics, blockchain, VR and AI are all black boxes with potentially catastrophic consequences if we don’t build in ethical guardrails.
Leaders need to understand that developing a digital ethical risk strategy is well within their capabilities and management should not shy away.
“Beware of what we create” might be the message of Oppenheimer, on the face of it a film about the invention of the atomic bomb, but with obvious parallels to today.
Director Christopher Nolan might have had the nascent Cold War in mind when he began the project, but since then Russia’s invasion of Ukraine and the rise of AI have given his film added resonance.
“When I talk to the leading researchers in the field of AI right now, they literally refer to this right now as their Oppenheimer moment,” Nolan said during a panel discussion of physicists moderated by NBC News’s Chuck Todd. “They’re looking to his story to say, okay, what are the responsibilities for scientists developing new technologies that may have unintended consequence?
“I’m not saying that Oppenheimer’s story offers any easy answers to those questions, but at least can serve as a cautionary tale.”
Nolan explains that Oppenheimer is an attempt to understand what it must have been like for those few people in charge to have developed such extraordinary power and then to realize ultimately what they had done. The film does not pretend to offer any easy answers.
“I mean, the reality is, as a filmmaker, I don’t have to offer the answers,” he said. “I just get to ask the most interesting questions. But I do think there’s tremendous value in that if it can resonate with the audience.”
Asked by Todd what he hoped Silicon Valley might learn from the film, Nolan replied, “I think what I would want them to take away is the concept of accountability. When you innovate through technology, you have to make sure there is accountability.
“The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does.”
There has to be accountability, he emphasized. “We have to hold people accountable for what they do with the tools that they have.”
Nolan was making comparisons between nuclear Armageddon and AI’s potential for species extinction, but he is not alone in calling on big tech to place the needs of society above its own greed.
In an essay for Harvard Business Review, Reid Blackman asks how we can avoid the ethical nightmares of emerging tech including blockchain, robotics, gene editing and VR.
“While generative AI has our attention right now, other technologies coming down the pike promise to be just as disruptive. Augmented and virtual reality, and too many others have the potential to reshape the world for good or ill,” he writes.
Ethical nightmares include discrimination against tens of thousands of people; tricking people into giving up all their money; misrepresenting truth to distort democracy; and systematically violating people’s privacy. The environmental cost of the massive computing power required for data-driven tech is among countless other use-case-specific risks.
Blackman has some suggestions as to how to approach these dilemmas — but it is up to the tech firms that develop the technologies to address them.
“How do we develop, apply, and monitor them in ways that avoid worst-case scenarios? How do we design and deploy [tech] in a way that keeps people safe?”
It is not technologists, data scientists, engineers, coders or mathematicians who need to take heed, but the business leaders who are ultimately responsible for this work, he says.
“Leaders need to articulate their worst-case scenarios — their ethical nightmares — and explain how they will prevent them.”
Blackman examines a few emerging tech nightmares. Quantum computers, for example, “throw gasoline on a problem we see in machine learning: the problem of unexplainable, or black box, AI.
“Essentially, in many cases, we don’t know why an AI tool makes the predictions that it does. Quantum computing makes black box models truly impenetrable.”
Today, data scientists can offer explanations of an AI’s outputs that are simplified representations of what’s actually going on. But at some point, simplification becomes distortion. And because quantum computers can process trillions of data points, boiling that process down to an explanation we can understand — while retaining confidence that the explanation is more or less true — “becomes vanishingly difficult,” Blackman says.
“That leads to a litany of ethical questions: Under what conditions can we trust the outputs of a (quantum) black box model? What do we do if the system appears to be broken or is acting very strangely? Do we acquiesce to the inscrutable outputs of the machine that has proven reliable previously?”
What about an inscrutable or unaccountable blockchain? Having all of our data and money tracked on an immutable digital record is being advocated as a good thing. But what if it is not?
“Just like any other kind of management, the quality of a blockchain’s governance depends on answering a string of important questions. For example: What data belongs on the blockchain, and what doesn’t? Who decides what goes on? Who monitors? What’s the protocol if an error is found in the code of the blockchain? How are voting rights and power distributed?”
Bottom line: Bad governance in blockchain can lead to nightmare scenarios, like people losing their savings, having information about themselves disclosed against their wills, or false information loaded onto people’s asset pages that enables deception and fraud.
OK, we get the picture. Tech out of control is bad. We should be putting pressure on the leaders of the largest tech companies to answer some hard (ethical) questions, such as:
Is using a black box model acceptable?
Is the chatbot engaging in ethically unacceptable manipulation of users?
Is the governance of this blockchain fair, reasonable, and robust?
Is this AR content appropriate for the intended audience?
Is this our organization’s responsibility or is it the user’s or the government’s?
Might this erode confidence in democracy when used or abused at scale?
Is this inhumane?
Blackman insists: “These aren’t technical questions — they’re ethical, qualitative ones. They are exactly the kinds of problems that business leaders — guided by relevant subject matter experts — are charged with answering.”
It’s understandable that leaders might find this task daunting, but there’s no question that they’re the ones responsible, he argues. Most employees and consumers want organizations to have a digital ethical risk strategy.
“Leaders need to understand that developing a digital ethical risk strategy is well within their capabilities. Management should not shy away.”
But what or who is going to force them to do this? Boiling it down: do you trust Elon Musk or Mark Zuckerberg, Jeff Bezos, or the less well-known chief execs at Microsoft, Google, OpenAI, Nvidia and Apple — let alone those developing similar tech in China or Russia — to do the right thing by us all?
AI isn’t likely to enslave humanity, but it could take over many aspects of our lives.
The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.
Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.
You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.
A less resource-intensive variation has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.
Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.
A paper clip-making AI run amok is one variant of the AI apocalypse scenario.
Actual Harm
In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.
Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes — from high-tech heists to ordinary scams.
AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.
These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.
Not In the Same League
The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.
Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.
AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.
Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.
What it Means to Be Human
Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.
For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
Not Dead But Diminished
So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.
The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”
The Medium, the Movie, the Message: IMAX for “Oppenheimer”
Watch: IMAX production on “Oppenheimer”
Director Christopher Nolan sought out a medium sufficiently powerful and immersive to present his historical epic Oppenheimer.
A devout believer that motion picture film is an essential component in creating a true cinematic experience, he chose to shoot the story of the scientist who oversaw the creation of nuclear weaponry in 65mm, 15-perf IMAX format. Learn more in the video above.
To clarify: That’s the same super large-gauge film stock that directors such as Quentin Tarantino (The Hateful Eight), Paul Thomas Anderson (The Master) and Stanley Kubrick (2001: A Space Odyssey) have used, but with a crucial difference.
Instead of running the film vertically through the camera (with the frame’s width spanning the stock perf-to-perf and the frame’s height measuring five perfs), this format runs the stock past the lens on its “side”: the frame’s height is defined from the perfs on the left to those on the right, and the frame’s width extends to 15 perfs.
Watch: The image capture for Oppenheimer
This allows for a truly enormous frame that can be projected significantly larger and with much finer detail than even the above-mentioned epics. It also uses approximately three times as much film between “action” and “cut” as traditional 70mm, itself a prohibitively expensive medium for all but the most established filmmakers to even consider.
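Some rough arithmetic, using the commonly cited frame dimensions, shows where that factor of three comes from:

```python
# Commonly cited image dimensions for the two 65mm formats, in millimeters.
w5, h5 = 52.48, 23.01     # 5-perf 65mm (The Hateful Eight, The Master, etc.)
w15, h15 = 70.41, 52.63   # 15-perf IMAX

print("image area ratio:", round((w15 * h15) / (w5 * h5), 2))   # ~3.1x
print("film per frame:", 15 / 5, "x the perforations")          # the 3x stock burn
```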
While most filmmakers, even diehard lovers of celluloid, would shoot the black-and-white portions of a 2023 movie in color and create the monochrome version during the digital color grade, that approach simply wouldn’t do for Nolan.
The fact that there was no black-and-white negative stock manufactured in the 65mm format required didn’t slow him down, either, and he arranged to have Eastman Kodak manufacture their black-and-white negative stock in 65mm gauge just for this project.
Watch: The trailer for Oppenheimer
As reported by Rochester First, from Kodak’s hometown, “part of the movie was shot on custom-made film by Kodak: black-and-white large-format IMAX film.”
“Nolan is one of our biggest customers,” says Diane Carroll Yacoby, commercialization manager at Kodak. “He engaged Kodak right from the beginning. He had several ideas he wanted to try, one of which was to capture some of the scenes in Oppenheimer in black and white large format negative, which unfortunately, we did not have available.”
The task of making the Double-X negative in the larger gauge involved some serious revamping of Kodak’s manufacturing plant in order to cut the rolls of emulsion and perforate them to create sprocket holes for a format in which Double-X had never been sold.
“It’s so cool … bringing your friends and family [and saying] ‘We made this film, I remember when this was going through the whole process,’” enthuses Kristen Taglialatela, operations manager of film finishing, in the article.
As viewers of IMAX specialty films know, the medium is a powerful way to present giant, sweeping landscapes, “but I got very curious to discover this as an intimate format,” cinematographer Hoyte van Hoytema recalls.
When shooting in the format, he says, “The face is like a landscape. There’s a huge complexity and a huge depth to it.”
The history of the title character’s role in transforming nuclear warfare from a possibility to reality, says Nolan, “is one of the biggest stories imaginable.
“Our film tries to take you into his experience. And IMAX, for me is a portal into a level of immersion that you can’t get from any other format.”
Van Hoytema told Collider that Nolan is very much dedicated to shooting scenes with a single camera, the old-school way.
Of course, the expense of running multiple 15-perf 65mm cameras for every take would likely be prohibitive even with the type of budgets he gets for his movies, but it’s about more than that.
The cinematographer elaborates that the camera is like the “magic box” on set that everything is directed towards. All the action “is sucked into that one little box, so that one camera really becomes an epicenter of our shoots. As soon as you put two cameras on the set, that attention gets somehow divided.”
Additionally, the director is not one for hanging around the video village. Nolan is almost always found directly to the side of the camera during a take. “The actors know exactly towards where they’re working,” the DP says, adding that this is also true for “the production designers, the set dressers and us in lighting… it all has to [aim] towards that one direction.”
The IMAX cameras, in addition to their size, present some challenges for shooting, especially when capturing the type of close, intimate compositions that make up the majority of the film. Van Hoytema explains that each frame of 15-perf 65mm stock “is a huge piece of exposed film. So, 24 of those frames… are pulled through the camera per second! You can imagine how much power and inertia and how big the motors are that you need in order to do that, and how aggressive your mechanism has to be to… stop that frame” to achieve the intermittent motion necessary for a film camera to work.
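To put rough numbers on that transport problem (the perforation pitch of about 4.75 mm is the one assumption here):

```python
# Each 15-perf frame advances roughly 15 x 4.75 mm = ~71 mm of film, and the
# camera does that 24 times every second.
perf_pitch_mm = 4.75
imax_speed = 15 * perf_pitch_mm * 24 / 1000   # ~1.71 m of film per second
s35_speed = 4 * perf_pitch_mm * 24 / 1000     # ~0.46 m/s for 4-perf 35 mm
print(round(imax_speed, 2), "m/s vs", round(s35_speed, 2), "m/s")
# And the movement must still bring every frame to a dead stop behind the
# gate 24 times a second, hence the big motors and the noise.
```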
This is why “that camera is so loud, and it’s so bulky. It’s just physically very heavy.” And while there is a blimp available that will dampen the sound (likened to that of a lawnmower), the device is itself a rather enormous apparatus, which van Hoytema refers to as the “coffin.”
Another factor comes from the unique attributes of the optics required. Lenses designed to cover such a large image area will, as a rule of physics, need longer focal lengths to capture the same framing as lenses designed for a traditionally sized frame. For example, if a filmmaker captures a head-and-shoulders shot of an actor from six feet away on a 35mm 1.85:1 frame, they will require a much longer lens to get the same head-and-shoulders framing on the far larger IMAX frame with the camera at the same distance from the subject.
All else being equal, longer focal lengths have shallower depth of field to work with and longer lenses are trickier to design for very close focus than wider lenses.
As a piece in Vulture notes, “shooting close-ups in IMAX was harder, technically, than capturing the vistas of Los Alamos, N.M., where Oppenheimer oversaw the creation of the atomic bomb as part of the Manhattan Project. van Hoytema usually relied on tight 80mm lenses for close-ups, but he needed to get closer than six feet for greater intimacy. With no available lenses up to the task, Panavision lens specialist Dan Sasaki supplied and adapted [medium format still] Hasselblad, Panavision Sphero 65, and Panavision System 65 glass specifically for the purpose.”
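The focal-length scaling is straightforward to work out. A sketch matching the horizontal field of view, with approximate frame widths (both figures commonly cited, neither taken from the article):

```python
# Field of view depends on focal length relative to frame size, so matching a
# framing across formats scales the focal length by the frame dimension.
def matched_focal(f_ref_mm: float, dim_ref_mm: float, dim_target_mm: float) -> float:
    return f_ref_mm * dim_target_mm / dim_ref_mm

# Approximate frame widths: ~21 mm for a 1.85:1 extraction on 35mm film,
# ~70.4 mm for 15-perf IMAX. A 24 mm lens on the 35mm frame frames roughly
# like the 80 mm IMAX close-up glass cited above.
print(round(matched_focal(24.0, 21.0, 70.4), 1), "mm")  # ~80 mm
```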
Nolan wanted to avoid CGI for the film, despite the fact that some of the most powerful moments involve abstractions, such as imagery of molecules and incendiary moments of atomic bomb testing, particularly the climactic Trinity Test, which the title character and his fellow scientists set off, proving that their atomic bomb would (spoiler alert) both work as expected and not set the Earth’s atmosphere ablaze. It’s obviously a key scene in the movie, and Nolan wanted to put it together with practically acquired imagery.
“Pushing the Button:” Recreating the Trinity Test for “Oppenheimer”
An interview in American Cinematographer covers quite a bit about the specialized glass Panavision made for Oppenheimer, and takes a particularly deep dive into a pair of lenses that had to be specially made to capture the practical effects that would make up elements of that bomb test sequence. The fascinating interview with Dan Sasaki, Panavision’s vice president of Optical Engineering and Lens Strategy, goes into the creation of these custom optics.
“Initially,” Sasaki recalls, van Hoytema “really couldn’t say what he was trying to do, because any project with Chris Nolan is generally very top-secret. All he said was that he wanted a waterproof probe lens that would focus very close, to cover whatever format we could. Then we said, ‘Well, what camera?’ And he said, ‘Can you make it work for IMAX?’ We told him it was going to be a challenge, but then I remembered we built large-format probes for the airplane cockpits in [Nolan’s feature] Dunkirk. Then there was talk about photographing particles with a probe submerged in water.”
As the discussion evolved, Sasaki recalls, the cinematographer doled out a bit more info until a Panavision optical expert realized what he was after: “‘Oh, you want a microscope.’ And he goes, ‘Yeah, a wide-angle microscope for IMAX.’ We came up with a proof of concept for [both the 5-perf] 65mm and IMAX cameras.”
“It all had to do with the fact that we wanted to see physics, we wanted to be within the world of atoms and, of course, we couldn’t build lenses the size of atoms!” van Hoytema explained to British Cinematographer.
They then set out to design both a 24mm and a 35mm version of this microscope lens, tackling the 35mm first, as they knew the 24mm would be more difficult. After more testing, Sasaki recalls, “Hoyte asked us for closer focus and to make sure it can go at least nine inches below the water’s surface. Our next step was to make it waterproof and set the lens stops.
“Initially, they were testing the probe with a waterproof membrane in the side of the tank that limited the diameter of the relay, but because the depth of field was so shallow, he was working at deeper stops, which accommodated smaller glass elements and shrunk the size of the probe.”
As with any project of such complexity, the initial prototype had some issues, including with the waterproofing. Sasaki’s team also realized that the calculations they’d made for the extremely fine close-focus abilities (one of the lenses needed to focus down to roughly 1mm!) were off, because the water itself had a small but perceptible effect on the ability to calculate precise focus. To address that, they built “an intermediate optical surface, similar to a snorkel, to separate the lens itself from the water.”
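The underlying optics here is refraction: light bending at the water's surface makes submerged subjects appear closer than they are, which shifts focus. A back-of-the-envelope check using water's refractive index (a textbook approximation, not a figure from the article):

```python
# For near-perpendicular viewing, apparent depth ~= real depth / n,
# with n ~= 1.33 for water.
n_water = 1.33
real_depth_in = 9.0  # the probe's required working depth from the article
print(round(real_depth_in / n_water, 1), "inches apparent depth")  # ~6.8
# With close-focus tolerances on the order of a millimeter, a shift that
# large cannot be ignored, hence the snorkel-like intermediate surface.
```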
“Hoyte is one of those people you’ll do anything for because every project he touches is amazing,” Sasaki enthuses. “He’s also technically astute — he has his own machine shop, and he builds his own parts, so he’s very understanding of the process and gives us the lead time to do things right. He’s hands-on, so it’s not, ‘here’s what I want’ and then comes back six months later. He gets involved.”
“We created science experiments,” the DP told IndieWire. “We built aquariums with power in it. We dropped silver particles in it. We had molded metallic balloons which were lit up from the inside. We had things slamming and smashing into one another such as ping-pong balls, or just had objects spinning.
“We had long shutter speeds, short shutter speeds… negative overexposure, underexposure. It was like a giant playground for all of us,” the DP recalls of these practical effects shots.
van Hoytema’s enthusiasm for this format is evident in his recent interview with British Cinematographer: “IMAX is constantly innovating, and they’re constantly helping us solve problems or make those cameras better. I always compare that camera to, in a way, a Formula One car. It needs a lot of care and a lot of love. But anything that gives images like that, you wouldn’t expect less. It’s not an off-the-shelf thing that just shoots school pictures. In order to get to that very specific level, it just needs service and a very meticulous guidance.”
“I’ve seen Oppenheimer twice, in digital and 70 mm IMAX,” writes the author of that Vulture piece. “Both times, as it turns out, in the exact same auditorium. So, while I can’t speak to the full range of formats in which the movie is being exhibited — standard and 4K projection, laser and xenon IMAX, not to mention 35 mm and non-IMAX 70 mm film — I can say precisely how much of a difference seeing it in this most rarefied of formats brings to the process, and how much is just hype.
“Oppenheimer is going to look spectacular in any of them…But where most IMAX movies are solely interested in exploiting its capacity for spectacle—making the big things bigger and the loud things louder—Oppenheimer is up to something different. While the Trinity test does fill the theater floor to ceiling with a cloud of nuclear fire, the movie is largely driven by conversations.
“By blowing those conversations up so much larger than life, to the point where Cillian Murphy’s eyes are not just the color but the size of swimming pools, Nolan underlines how seemingly mundane or undramatic events, the kind movies often don’t even bother with, can have absolutely massive consequences.
“The showstopper, so to speak, is the Trinity explosion itself, but the movie is dotted with arresting images all along, some of which remain abstract or ambiguous until the closing moments: the ripples of raindrops in a pond evolve into the blast radii of nuclear detonations covering the globe; an ethereal vision of clouds that we later realize are the ghostly trails of missile launches. And that’s where the added power of the 70 mm IMAX format, which offers more than four times the resolution of the best commercial digital projection, really comes in.”