What Are the Best Technologies for Targeted Ads on Streaming TV?
Adrian Pennington
TL;DR
As the video streaming market evolves, all eyes are focused on improving monetization and delivering an outstanding quality of experience, says ad tech solutions vendor Broadpeak.
Blending media measurement and market research is one way to effectively plan for streaming ads, the company says.
In-context advertising testing can track how well ads maintain audience interest and measure ad impact on things like product awareness, consideration and purchase intent, within a streaming environment.
With almost 100% of US TV households subscribing to a streaming service, media planners must incorporate Connected TV and streaming video into their ad buying mix. A decade ago, the medium lacked advertising potential, but now many of the large streaming services are growing their ad-supported user base, and FAST platforms are gaining traction too.
“To effectively enter this arena, media planners need to take a balanced approach. Instead of just comparing it to TV, it’s important to understand its unique performance abilities and adjust plans accordingly,” advises Heather O’Shea, Chief Research Officer at marketing consultancy Alter Agents, writing in Advertising Week.
Unlike TV ads with fixed schedules, streaming allows for more flexibility in ad placement and targeting, more similar to digital media. Media planners should use this to deliver personalized messages that connect with viewers, O’Shea says.
Streaming video certainly presents exciting opportunities for advertisers, but many unknowns remain. For instance, measuring the effectiveness of streaming ads poses a unique set of hurdles, given the fragmented nature of the landscape and the lack of standardized metrics.
“Many media planners still have a lot of questions about how to best implement advertising on streaming platforms,” O’Shea says. “This medium lands somewhere in the middle between traditional TV, which has primarily been used as an awareness tool, and digital advertising which is heavily utilized as a direct response tool.”
Her focus is on gathering the data and insights planners need for defining the best approach. One tactic where Alter Agents have had “significant success,” she says, is using in-context advertising testing. This method can track how well ads maintain audience interest and measure ad impact on things like product awareness, consideration and purchase intent, within a streaming environment.
“It results in a deep set of insights that media planners and advertisers can put to work right away.”
Technical solutions to digital advertising will be represented across the exhibition and conference at NAB Show. Among vendors with product in this area is Broadpeak. It brings its targeted ad insertion solutions to the show and will demo a “disruptive” feature that boosts the performance of targeted advertising for video streaming services by allowing viewers to click on banner ads and receive a notification on their phone, guaranteeing clicks for advertisers.
“Today we are at an inflection point where ad budgets will flow back to TV,” says Pieter Liefooghe, business development director at Broadpeak. “First of all, we are now able to bring the targeted advertising technology from digital advertising to the TV screen, by either implementing client-side ad insertion, server-side ad insertion, or a combination of both,” he explains.
“Secondly, due to how personal data for targeting purposes is collected and used in digital advertising, there are increasingly privacy concerns and laws that make this ad medium less attractive compared with targeted TV advertising.”
Broadpeak has also issued a “Guide to Ad Insertion,” including information on how digital advertising has impacted the spend of TV advertising. The Guide carries an overview of connected TV advertising, which has seen increased interest from advertisers, as well as an analysis of ad partnerships that can further increase the business case for TV service providers to implement an advertising solution.
At NAB Show 2024, the company will also showcase a new Spot2Spot feature.
“Today, most targeted ad technologies are limited to full ad break replacement, minimizing value for the targeted audience,” CEO Jacques Le Mancq explains. “With the Spot2Spot feature, content owners can replace specific spots within the ad break. Broadpeak will demo spot-level ad replacement for addressable TV and comprehensive ad tracking for both replaced and non-replaced ads.”
Alan Wolk, Co-Founder/Lead Analyst at TVREV, will be moderating the NAB Show session “The Future of FAST: Lessons Learned and What’s Next,” Tuesday, April 16 at 3 p.m.
How to Maximize Ad Revenues With SSAI
As we approach NAB Show, OTT advertising is on the industry’s mind like never before. Nearly all the major broadcasters and streaming providers have embraced some form of advertising to increase ARPU and move to business models that are sustainable over the long term.
Server-side ad insertion (SSAI) is the central cog in OTT advertising because it joins streaming technology with adtech. If anything goes wrong with the SSAI, valuable advertising revenue is lost. On the other hand, SSAI has the ability to transform advertising revenues and empower providers to compete on the digital stage.
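To make the "central cog" idea concrete, here is a minimal, hypothetical sketch of what SSAI does at the manifest level: the playlist is rewritten per viewer, with ad-break cues replaced by segments returned from an ad decisioning call. The cue tag, segment durations and the decisioning function are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of SSAI manifest stitching (illustrative only).
# Assumptions: the source HLS playlist carries a cue-out tag at each ad break,
# ad segments are 10 seconds long, and decision_server_ads() stands in for a
# real ad-decisioning (e.g., VAST) request.

AD_BREAK_CUE = "#EXT-X-CUE-OUT"  # assumed SCTE-35-style marker in the playlist

def decision_server_ads(viewer_id: str) -> list[str]:
    """Placeholder for a per-viewer call to an ad decisioning server."""
    return [f"https://ads.example.com/{viewer_id}/spot_{i}.ts" for i in range(3)]

def stitch_playlist(playlist_lines: list[str], viewer_id: str) -> list[str]:
    """Return a viewer-specific playlist with ads spliced in at each cue."""
    out = []
    for line in playlist_lines:
        if line.startswith(AD_BREAK_CUE):
            out.append("#EXT-X-DISCONTINUITY")         # splice from content to ads
            for url in decision_server_ads(viewer_id):
                out.append("#EXTINF:10.0,")             # assumed 10-second ad segments
                out.append(url)
            out.append("#EXT-X-DISCONTINUITY")         # splice back to content
        else:
            out.append(line)
    return out
```

Because the splice happens server-side, the viewer receives one continuous stream, which is why SSAI can preserve the traditional TV experience while carrying addressable ads.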
The key to maximizing SSAI revenues is to allow broadcasters/customers to create an ad product that boosts the traditional benefits of TV by adding the modern benefits of digital advertising.
TV’s traditional benefits
Mass reach: Nothing offers mass reach in a short period of time like TV. It also has the ability to drive discussion and get in the public psyche – it creates “water cooler” moments that are increasingly hard for advertisers to find elsewhere.
Quality of delivery: Nowhere else can advertisers get a broadcast-quality 15-30-second ad with such high engagement and view-through rates.
Digital advertising benefits
Addressability: In the digital realm, advertisers expect addressability. Broadcasters and streaming providers must convince brands to increase spend on TV rather than YouTube or TikTok – which both offer fantastic targeting.
Programmatic: There is great potential to increase fill-rates by adopting programmatic. It helps secure the highest possible CPM for each available ad spot.
Measurement: Real-time measurement of ad views is essential for advertisers to tweak and improve their campaigns. It’s what they do across other digital channels so they want to do the same with OTT.
SSAI has the power to deliver an appealing blend of both worlds: the mass reach and viewer experience of TV and the advanced advertising benefits of digital.
Implementing SSAI to unlock the full value of OTT advertising can be complex. Here are the key considerations for broadcasters and streaming providers to enhance their advertising offerings and grow revenues:
Scale and Reliability
Live streaming and major sports have mass appeal and are therefore highly valuable. But applying addressability and one-to-one measurement at scale is impossible without a dynamic prefetch extension to your SSAI. Otherwise it is highly likely that ad-decisioning servers will time out and fail to return a response.
It’s important to be realistic about concurrency, meaning the number of viewers watching at the same time. It’s not an average over a day or a period of play; it’s minute by minute.
Mass reach is not only the domain of live streaming. VOD creates water-cooler fortnights. Some shows are a must-watch. Remember how Tiger King made Joe Exotic a household name in the space of a fortnight? Even though viewers are not pressing “play” at the same time, they are doing so in a short timeframe and putting extra demand on the streaming and advertising tech.
Maximizing inventory
In live sports, many advertising opportunities are missed because they’re so challenging to access. A half-time ad break in a soccer match can be planned for. The timing is dependent on the referee’s whistle, but the duration of the ad break and session ID is known in advance.
But what makes sport so compelling are the twists and turns; in other words, the unexpected. If a World Cup soccer match goes to penalties, then all of a sudden an unplanned, but incredibly valuable, ad break is created immediately before the first penalty kick. We’ve seen audiences double between the end of extra time and the start of penalties.
Dynamic prefetch with contingency ad pods is essential to capitalize on these highly engaging and valuable moments.
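As a rough illustration of the idea (not any vendor's implementation), the sketch below prefetches ad decisions for connected viewers in batches ahead of an expected break, applies a per-request timeout, and falls back to a contingency ad pod for any viewer whose decision did not arrive in time. Batch sizes, timings and the decisioning call are assumptions.

```python
# Hedged sketch of dynamic prefetch with a contingency ad pod (illustrative).
# fetch_decision() stands in for a real ad-decisioning request (e.g., VAST).

import asyncio

AD_SERVER_TIMEOUT = 2.0                                        # seconds per request (assumed)
CONTINGENCY_POD = ["https://ads.example.com/house/spot.ts"]    # fallback ads

async def fetch_decision(viewer_id: str) -> list[str]:
    """Placeholder per-viewer ad decision; returns ad segment URLs."""
    await asyncio.sleep(0.05)                                  # simulated network latency
    return [f"https://ads.example.com/{viewer_id}/spot.ts"]

async def prefetch_break(viewer_ids: list[str], batch_size: int = 500) -> dict[str, list[str]]:
    """Request decisions in batches before the break and cache the results.

    Viewers whose request fails or times out get the contingency pod, so the
    break is always filled even if the ad server is overwhelmed."""
    cache: dict[str, list[str]] = {}
    for start in range(0, len(viewer_ids), batch_size):
        batch = viewer_ids[start:start + batch_size]
        results = await asyncio.gather(
            *(asyncio.wait_for(fetch_decision(v), AD_SERVER_TIMEOUT) for v in batch),
            return_exceptions=True,
        )
        for viewer, result in zip(batch, results):
            cache[viewer] = CONTINGENCY_POD if isinstance(result, Exception) else result
    return cache

# When the break starts, the stitcher reads cache[viewer_id] for each session
# instead of making a live call that might time out at peak concurrency.
```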
Campaign management
SSAI is not simply a case of switching on a tap and letting the ads flow in. It must be integrated closely with the adtech ecosystem to deliver ad operations teams the right data to manage their campaigns. In order to consistently deliver the highest fill-rates, real-time measurement of ads viewed must be surfaced within a live 24/7 campaign dashboard.
In OTT, too much advertising is measured by ads stitched. That method is not sufficient for ad ops to make informed decisions about their campaigns.
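One simple way to picture the difference is to count stitched ads separately from client-side tracking beacons (impression, quartile and completion events, as in common VAST tracking conventions) and surface the gap in the campaign dashboard. The sketch below is a hypothetical data model for illustration, not any product's schema.

```python
# Hedged sketch: distinguish "ads stitched" from "ads actually viewed" using
# client tracking beacons. Event names follow common VAST tracking conventions;
# the in-memory storage is a stand-in for a real metrics pipeline.

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class CampaignStats:
    stitched: int = 0                                   # ads inserted into manifests
    events: dict = field(default_factory=lambda: defaultdict(int))

stats: dict[str, CampaignStats] = defaultdict(CampaignStats)

def record_stitch(campaign_id: str) -> None:
    stats[campaign_id].stitched += 1

def record_beacon(campaign_id: str, event: str) -> None:
    """event: impression, firstQuartile, midpoint, thirdQuartile or complete."""
    stats[campaign_id].events[event] += 1

def dashboard_row(campaign_id: str) -> dict:
    s = stats[campaign_id]
    viewed = s.events["impression"]
    return {
        "stitched": s.stitched,
        "viewed": viewed,
        "completed": s.events["complete"],
        "stitched_vs_viewed_gap": s.stitched - viewed,
    }
```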
UX and complexity
SSAI delivers a consistent, seamless viewer experience across all content types and devices. It effectively replicates the experience of traditional TV. It also goes some way beyond that.
SSAI must support all kinds of UX features, from clickable ads to scrubbing. Increasingly, viewers are expecting longer DVR windows, meaning the SSAI must be able to support whatever business logic is required to maximize the potential of advertising in live-rewind mode.
As you can see, SSAI is capable of delivering a huge amount of added value to OTT advertising propositions. As the industry’s reliance on advertising revenue grows, it is increasingly important that broadcasters and streaming providers offer the best possible ad product to the market in order to deliver better value and appeal to more advertisers.
Learn from these industry experts as they navigate the world of FAST content, user engagement and monetization.
Here, Wolk takes a look at the FAST landscape.
What are the biggest trends impacting FAST right now? Alan Wolk: Three of the biggest trends we are seeing are:
Better User Experience: This includes everything from better integration between linear and on-demand, improvements to the interface, better recommendations and personalization.
A Push To Quality: Now that FASTs are growing up, they are getting rid of their lower-performing content and replacing it with more premium content. The fact that studios are now licensing these sorts of shows to the FASTs helps too. This means that the notion that anyone who owns some content can magically stand up a FAST channel and monetize it is coming to an end.
More Local Content: FASTs are adding in local news and other local programming. Most is coming from local broadcasters, the rest from streaming-only services. But overall, the demand for local programming is high.
What are the biggest challenges those working in FAST have to overcome? The notion that it’s not “real TV” is a big challenge. So is the lack of transparency around data. Advertisers feel they are not getting enough feedback on where their ads run, and rights holders also feel they’re not getting adequate viewing stats.
That needs to change if FASTs are to succeed. The other challenge is that there is not much consistency across the FAST ecosystem.
Some of the bigger players can sell fully intact channels to the FAST services with shows that run at the same time across all of them. But for most, the FASTs are curating their own channels, which makes it challenging for advertisers.
What is one thing you wish FAST professionals had a better understanding of? There is so much confusion about nomenclature. It’s as if people don’t seem to realize that there are FAST services like Pluto TV and The Roku Channel, and that those services have linear channels (aka “FAST Channels”) and on-demand programming. Some people insist on talking about the on-demand content as if it is separate, or refer to “FAST Channels” as if they were somehow detached from the services they run on.
But if you’re an advertiser, your inventory runs on both linear and on-demand—it’s not as if they sell them separately. There’s also a lack of understanding about the different types of FAST services, the idea that FASTs attached to device OEMs are very different than FASTs owned by the major media companies and the growing range of independent FASTs. And that there is also considerable overlap between all three.
What are the top three things that attendees should go hunt down on the show floor to expand what they just learned in your sessions? I would look at what the OEMs are doing with their FAST services: how are they integrating them into the main interface so that they are front and center of the viewer experience?
Talk to the independent FAST services and those attached to major media companies as well.
Finally, I would attend some sessions on advertising, since FAST is all about advertising.
What discussions should they be having with the exhibitors? Ask them about what they are doing to differentiate themselves from other FAST services. What is their competitive advantage for consumers? For advertisers? For content owners?
Both Consumers and Creators Need to Take Responsibility for AI Content
TL;DR
Adobe CEO Shantanu Narayen talks about how the company is incorporating AI and its work to tackle misinformation, urging creators to either use AI or miss out.
Narayen acknowledges the responsibility of companies like Adobe to mark the provenance of content generated by AI, but puts the onus on consumers to be more aware of the media they consume.
He doesn’t believe artists will be overtaken by AI, only that Adobe will work with AI to build new tools for creators — but then what else is he going to say?
“If you don’t learn to use AI you’re going to be in trouble,” declared Adobe chair and CEO Shantanu Narayen, who also put the onus on the general public to learn more about AI and to question the veracity of the content they are served up as fact-based news.
He was speaking to Washington Post tech columnist Geoffrey Fowler in an illuminating exchange about how the tech vendor is seeking to balance its commercial aims with tackling misinformation.
There’s also an existential threat to Adobe itself. Won’t generative AI simply erode any business the vendor has to market its own content creation tools?
Narayen responds. “I think [AI] is actually going to make people much more productive, and it’s going to bring so many more marketers in small or medium businesses into the fold [to be able to use Adobe’s tools even more easily to create content],” he says.
“AI really is an accelerant. It’s about more affordability. And it’s about more accessibility. And Adobe has always won when we solve problems and allow more people into the field.”
He maintains that GenAI was a good thing, on the whole, both for creators and for Adobe itself:
“It is going to be disruptive if we don’t embrace it and we don’t use it to enable us to build better products and attract a whole new set of customers. But I’m completely convinced that this will actually be an accelerant to further democratize technology, rather than just a disruption.”
Fowler asks how Adobe can convince the creatives who buy its tools that these tools — and Adobe’s AI Firefly — are not in the process of replacing them with AI.
“I’m convinced that the creatives who use AI to take whatever idea they have in their brain are going to replace people who don’t use AI,” Narayen replies.
“If people don’t learn to use it, they’re in trouble. I would tell young creators today that if you want to be in the creative field, why not equip yourself with everything that’s out there that enables you to be a better creative. Why not understand the breadth of what you can do with technology? Why not understand new mediums? Why not understand the different places where people are going to consume your content? A knowledge of what’s out there can only be helpful rather than ignoring it.”
Keeping the creator community at the center of its brand, Adobe has opted to differentiate itself from other AI tools developers, like Stable Diffusion or OpenAI, in training Firefly on data that it owns or that creators have given permission for use.
“I think we got it right, in terms of thinking about data and in terms of creating our own foundation models and learning from it,” he says. “But most important in creating the interfaces that people have loved. I think we’ve been really appreciated by the creative community for having this differentiated approach.”
The conversation shifts to the dangers of AI, and how much of a threat AI poses to truth. Fowler notes that people have long been able to use Photoshop “to try to lie or trick” people into believing misinformation, so what’s different with GenAI?
Narayen says technology has always had unintended consequences. “It’s an age-old problem, [but where] generative AI is different is the ease at which people can create content. The pace at which it can be created is going to dramatically expand,” he says.
“So it’s incumbent on all of us who are creating tools, and those distributing that content, including the Post, to actually specify how that piece of art was created, to give it a history of what has happened.”
“The challenge — and the opportunity — that we have, is that this is not just a technology issue. Adobe and our partners have worked to implement credentials that identify, definitively, who created a piece of content, whether AI was involved, and how it was altered along the way. The question is, how do we as a company, an industry and a society, train consumers to want to look at that piece of content before determining whether that was real or not real,” Narayen says.
“We’re going to be flooded with more information. So it’s the training of the consumer, to want to interrogate a piece of content, and then ask Who created it? When was it created? That is the next step in that journey.”
Fowler pushes back on this, quizzing just how much onus should be on the user, or viewer, and how much responsibility publishers or AI vendors should share. He points out that Adobe was selling AI-generated images of the Israel-Gaza war and that Adobe said the images were released because they were labeled as made by AI. “But is that just proof that the general public is not adequately prepared to identify generative AI images versus originals?” he asks.
“The consumer is not solely responsible for all obligations associated with trying to determine whether it’s AI or not,” Narayen said.
“Certainly, distributors of that content and the creator of the content also has a role to play [but] the consumer has a role to play as well because they’re the ones who are at the end of the day consuming the content.”
He emphasizes the need for consumer education and insists that consumers take some, not all, but some responsibility for how they interpret the content they view, hear or read.
“The more a company like the Washington Post promotes this notion of content credentials, [the] education process will increase.”
Narayen also defends Adobe by saying it is not a source for news. “Adobe only offers creative content, we do not offer editorial content. And what people were doing was trying to pass off what was editorial content or actual events as creative. So we have to work and moderate [or] remove that content.”
Fowler counters that content credentials are welcome to those who view them as a good idea, but they still leave open the misuse of AI in content generation by bad actors. What can be done about them?
Narayen doesn’t really have an answer other than widening the education of the public. “The good guys are going to want to put content credentials in to identify their sources or identify what’s authentic. I think if we can continue to train other consumers to beware in terms of content [provenance] that’s one step in terms of the evolution of how we can educate people.”
He is optimistic about winning the battle. “We will get through this in a responsible way and it will both make people more productive and will make them more creative. We will respect IP, perhaps in a different way than it was done when it was just a picture, but it will happen, I’m confident of that. Companies and governments will work together to have the right thing happen.”
NAB Show Amplified: What Content Creators Need to Know for a Volumetric Video Future
TL;DR
Video content is transitioning from 2D experiences on flat screens to immersive spatial experiences, says Supersphere founder and CEO Lucas Wilson, requiring creators to adapt to technologies like virtual and extended reality.
The industry awaits the emergence of a “YouTube for spatial experiences,” Wilson says, a platform for sharing 3D content, expected to be developed by tech giants like Meta, Apple, or Google.
Supersphere will be introducing ArkRunr, a platform for virtual production that simplifies creating immersive environments without the high costs and complexity of current VP methods.
We are entering a world where video will no longer be captured and presented as a 2D experience on flat screens, but as spatial experiences enjoyed with virtual or extended reality head gear.
The building blocks are being put in place and creators of all kinds better get ready, says Lucas Wilson, founder and CEO of Supersphere.
“Immersive experiences matter because the user is more engaged. So the right question is not why VR has failed to take off, but in what direction content is going. All the trend lines point to content being more immersive.”
At Supersphere, Wilson has helped transform the live performance space by creating hundreds of XR, AR, VR and MR experiences for music artists ranging from Paul McCartney to Billie Eilish and Post Malone. The visionary exec, who will be presenting “Virtual Production for Content Creators” at the 2024 NAB Show, has seen the future and says the next evolution of video and immersive experience is volumetric.
“What doesn’t exist right now — and maybe Apple will create it soon — is the YouTube of spatial experiences. We’re ready for that,” he says.
The ability for anyone to capture and share experiences in the 3D world is coming, he says.
“Meta or Apple or Google will come up with the first true spatial distribution platform. The YouTube of the spatial world. I think that’s where we will all want to live.”
Arguably commercial VR really only began in 2019, when Meta released Quest, so it shouldn’t be surprising that VR has not moved beyond being a niche industry. Yet Meta has sold 20 million of its headsets and the Vision Pro, albeit in limited run, sold out in hours.
Wilson views VR as part of a continuum of immersive experiences which has taken us in short order from analog to HD to UHD TV via stereo 3D TV. “Each tech advance is aimed at delivering a more immersive experience but while TV set engineering and content distribution has been around for 80+ years, virtual reality is only just getting started,” he says.
“Headsets are a temporary anomaly. I think most people in the industry agree with that. The real answer will be when we have VR glasses.”
Meta’s Ray-Bans are one example. Another is being developed at Brilliant Labs. They are lighter, more comfortable, less obtrusive and, frankly, cooler.
“For a start, they won’t make people want to punch you if you’re wearing them in public,” says Wilson. “Headsets are always going to be a niche market because there are only a certain amount of people that will actually want to strap a device to their face, no matter how cool it is.”
He predicts that within a couple of generations of Qualcomm chip development the electronics will be small enough to fit inside AR glasses.
“Once that happens, with VR headsets in eyeglass form, then I really believe that our fundamental world changes in terms of how we communicate,” Wilson foresees.
“Kids already live and breathe by sharing content and communicating via digital devices. It’s natural to them but they still share 2D images. In the next couple of generations [of consumer electronics] they’re going to start sharing Volumes, they’re going to start sharing spaces and environments that they can interact with each other in. Once that happens, then why would you ever share a 2D photo again?”
Supersphere is getting ahead of the curve by bringing to market a new content creation tool capable of manipulating video and virtual worlds in a native 3D space.
It is called ArkRunr, and it launches right after NAB Show in April, initially targeting virtual production.
Wilson believes there’s a huge market for VP-style content creation but without the cost and paraphernalia of conventional LED volumes, camera tracking systems, and VADs.
“Anybody who has worked in virtual production knows that it is complicated, expensive and time consuming to achieve a good outcome. Moreover, there are no tools that exist in the mid-budget-to-creator range for that kind of work. So, we built our own.”
ArkRunr has in fact been used by Supersphere on lots of shows “with major artists,” so successfully, in fact, that Wilson decided to commercialize it.
Wilson calls it a Spatial Performance platform. The software ingests live video feeds (from a smartphone, for example) of an artist performing on a stage, or even their bedroom, and wraps it in a virtual environment complete with interactive lighting. The platform runs on Windows and requires the computing power of “an average gaming laptop.”
“Every musician, every creator streaming from their bedroom, their living room wants to up their game. This allows them to broadcast in custom XR, AR or VR scenarios with interactive lighting,” Wilson continues. “Another big market for us is corporate. You could imagine a virtual TEDx stage, a video presentation and dynamic lighting for a corporate keynote with high production value.”
With generative AI tools added to the mix, the ability to create digital content is going to be supercharged. Supersphere, for instance, has incorporated AI into ArkRunr to create lighting for specific musical styles.
“We are training [our algorithms] on thousands of hours of real lighting shows according to musical genre.”
Supersphere’s ambition is to be the “Live Nation of the immersive world,” says Wilson, “because we are licensing virtual representation rights for spaces that exist today and those that no longer exist.”
He elaborates, “If you want to play in the Cavern Club with the Beatles in the 1960s or with the Bee Gees in Studio 54 then we can bring them back to life. If you want to imagine the Cavern Club in a cyber-tech future, you can.”
Could AI Deconstruct Hollywood, Then Build It Anew for Everyone?
TL;DR
The radical production efficiency of AI is anticipated to have resounding creative implications.
On the one hand, AI will collapse the traditional content creation industry and conventional creative and technical roles dependent on it.
On the other, AI will be in the hands of literally anybody, opening new and unforeseen storytelling possibilities that could benefit diverse communities. Who could argue with that sentiment?
Filmmakers and artists are grappling with what AI means and no one can quite decide if it’s a good thing or a bad thing.
There are many apocalyptic scenarios for the film and TV industry, the most extreme of which sees the entire studio system (including even broadcast) collapsing, replaced by AI tools that can perform every function.
Yet this is also depicted as a double-edged sword we should welcome as the ultimate in democratization and infinite storytelling possibility.
This optimistic view appears just that — optimistic verging on the fingers-crossed — as experts look for a silver lining in the inevitable technology change sweeping the industry.
Perhaps we should even be making a dividing line in human history: Before AI and After AI.
As photoreal video and finessed prompt-to-text generation advances, it won’t be long before any movie or TV show, still image, painting, or novel created in the centuries of B-AI history is viewed as an outdated artifact.
More than that, the ability of AI to simulate anything could call into question how any and every work of art to date was crafted.
Even “behind the scenes” footage of humans actually crafting a film on set could be called into question. It could be faked, right?
That’s a pretty soul-destroying thought, but let’s have faith that we record and hand down the history of creation so that future generations appreciate the sweat, skill, inspiration and collaboration it took to make, say, Singin’ In The Rain or Raiders of the Lost Ark.
Hamper, however, also points out that our trust in what we see on screen has always been one of suspending disbelief. If someone is shot dead in a TV drama, we already know the actor wasn’t killed in real life.
“From reality TV narratives, to film lighting to special effects snow, you accept it. It’s all just been sorcery happening behind a screen. We have become fully locked into this fake reality,” he says.
“But at least it is human fakery,” Hamper adds, concerned that now even the skills with which humans used tools to “fake” things on screen will be completely taken over by machines.
Then he flips his own argument on its head. He believes (hopes?) that humans will still be essential for the creative process, “at least for now.”
“The one thing I encourage all creatives to think about is not to how to cut ahead of the curve of AI, not how to monetize it or clamber on the bandwagon, but to stop and think how these tools can help tell stories that have not been possible.”
The death of trust “may not be a bad thing,” he says, if we can use AI to conjure stories that help humanity connect with one another and the world around us in ways that have not been possible before.
Before we call time on the content creation ecosystem, let’s take another perspective. The stock footage industry, for instance, is reckoned by almost every pundit to be virtually wiped out, and soon.
This sector of the industry was predicted to be valued at $7 billion by 2027, according to research firm Arrington. That was in 2022. Since then, market leader Shutterstock has partnered with an AI developer to grow its image library with AI stills and video.
“The underlying business model of an industry that was supposed to near $8 billion in just a few years, is essentially wiped out in the medium term,” says a review of AI’s impact by Synapse.
Think again.
“The idea of going to these sites and purchasing 10 seconds of footage will fade. But high quality data is the only way OpenAI or any competitor will be able to create a usable model. It essentially shifts every B2C stock site to a B2B video supplier. OpenAI may also enlist an army of stock filmmakers to collect certain scenarios that are missing from the model.”
What about VFX? Surely another industry that will be upended by AI. Won’t the $400 billion animation industry dominated by the likes of Disney and Netflix “see massive disruption as the technological moat drops significantly?”
Maybe. Or maybe the money that went to a few (studios) will now be shifted. It stands to reason that one group to gain will be those supplying the underlying tech, thinks Synapse. Not necessarily the AI tool developers, but the makers of computer processors required to power the data crunch. (Could NVIDIA CEO Jen-Hsun Huang become the richest man on the planet?)
The rest of the pie could go to creators hitherto largely cut out of the greatest rewards.
“The industry risks being over-reliant on AI video models to serve their customers by making [content] more similar to wrappers than the foundations that help builders create,” says Synapse.
“Think of it this way. Rather than an entire team of animators, VFX artists, lighting specialists and more, an individual with a story to share will be able to create and distribute a story at high speed and efficient cost. Creation of new worlds in the gaming and VR space will be streamlined and available to the individual creator.”
Others also see this upside in the evisceration of the traditional content creation industry model.
Chris Wells is a content marketer, but his words appear on behalf of Lightworks, the editing system favored by Thelma Schoonmaker, among others.
In an essay written for the Lightworks blog, he endorses the optimistic outcome of AI even as it destroys jobs. Think of it as a phoenix from the flames.
“Aspiring filmmakers will no longer need expensive equipment and large teams to bring their ideas to life. Instead, all that will be needed is an internet connection and an idea to manifest all the rich, cinematic scenes one’s future auteur heart could desire.”
It’s a good thing, if you follow this line of thinking.
“Directors will be able to rapidly turn their visions into footage, learning from results and refining iterative drafts in a fast feedback loop previously impossible in such a visual medium. Entire short films could be brainstormed, drafted, revised, and finalized in days rather than months or years,” Wells continues.
“Filmmakers will also gain the flexibility to experiment with a wide range of styles and narrative directions, unencumbered by the practical constraints of traditional filmmaking. By streamlining the technical aspects of production through AI, Sora will liberate creators to focus purely on their directorial craft.”
What’s more, he contends, with a tool as powerful as AI in the hands of anybody, previous barriers for women, people of color, or people with disabilities will fall away. Who could possibly argue with that utopia?
“These instant video creation capabilities could place indie artists and major studios on equal footing like never before,” Wells writes. “Aspiring directors might no longer need to struggle to raise funds or await permission for the ‘right’ location. Their visions could spring to life at their fingertips. Lowering the barriers of entry through technology may lead to an exponential growth of new filmmaking talent from underrepresented communities.
“By making professional filmmaking radically accessible, Sora has the potential to promote empowerment and self-actualization for all.”
You can’t argue with its statement: “Whether we like it or not, we are forcibly standing on the precipice of a new era in technological innovation,” but you might take issue with the hope — for that’s what it is — that humans remain at the center of the creative process.
Lightworks wants to preserve “the human element in the AI Age,” says Wells.
“While Sora promises creators radical new capabilities for magical instantaneous video generation, the essence of videomaking remains profoundly human.”
Perhaps resistance is futile. While AI pushes the boundaries for experimenting with stylistic techniques once deemed practically impossible, “filmmakers must lead in establishing best practices for AI tools to expand creative possibility without overtaking human artistry or ethics.”
NAB Show Amplified: Generative AI’s Impact on Content Production
Captivating narratives, stunning visuals and life-like character interaction: Artificial intelligence (AI) is disrupting traditional content creation and delivery.
Dr. Hao Li is the CEO and co-founder of Pinscreen, a Los Angeles-based startup that builds advanced AI-driven virtual avatars, as well as an associate professor of computer vision at Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi.
In his presentation, Dr. Li will peel back the layers of generative AI for production – including the latest advancements in AI lip sync technology, face swap and de-aging, as well as the future potential of AI technology.
Dr. Li will present “Generative AI for Content Production: From Storytelling to Visual Effects, AI Lip Sync, and Beyond” on Saturday, April 13 at 10 a.m. at the NAB BEIT Conference Opening Session. Register here to attend.
“Dr. Li’s insights into how AI is shaping the future of content delivery will be invaluable to conference attendees,” says John Clark, senior vice president, NAB Emerging Technology and executive director, PILOT.
“This high-energy conference kickoff lays the groundwork for over 70 presentations and panels that will explore the rapidly changing media technology landscape.”
The NAB BEIT Conference will focus on the future of content delivery, next-generation systems and the opportunities and challenges at hand throughout its 70+ sessions. The conference’s forward-looking focus is designed for broadcast engineers and technicians, media technology managers, broadcast equipment manufacturers and R&D engineers. Key sessions include:
ATSC 3.0 Topics: Presentations include a look at the challenges, new technology, research, implementation and differences of NextGen TV.
Cybersecurity for Broadcasters: Explore strategies for protecting media assets and maintaining secure connectivity in live distributed production, discuss the convergence of artificial intelligence, cybersecurity and broadcasting as technological milestones and take a look at the fight against piracy.
Generative AI for Media: These presentations include recent advancements in transcription, translation and re-voicing, their ethical implications for media editing, the application of generative AI to leagues and media organizations and using AI to manage title versions and achieve global distribution requirements.
Emerging Technologies in Media Delivery: Delve into boosting multilingual broadcasting with AI/ML, monitoring enhancement strategies for elevated global content integrity and examining how private 5G networks challenge traditional Wi-Fi and public 5G for video streaming and wireless connectivity dominance.
Panels on AI in Media: These sessions look at the groundwork being laid to enable AI to perform basic interoperability between services and vendors, as well as how AI can offer solutions to multilingual broadcasting.
Radio Topics covering Visual Content, Remote Audio Operations and the Air Chain: These broadcast radio-focused presentations discuss how the adoption of enhanced screens in cars is increasing listener engagement and creating new revenue opportunities, explore how latency can be divided into layers in light of the significant delays brought by remote and cloud production, and examine the challenges with AM and FM radio that impact not only audio quality but PPM encodability.
Navigating the Creator Economy: AI Video Generators for Social Content
TL;DR
AI tools designed to improve and speed your marketing communications are legion. Here are a few of the latest video generators powered by AI.
Users can create videos for a wide variety of use cases with these tools. This includes generating educational videos, explainers, product demos and social media content.
These tools all work in similar ways, so it’s a question of try before you buy (or before you publish).
Video content is a must-have for businesses and content creators wanting to compete. At the same time, it has never been easier to create professional-looking video content by using AI to do most of the work. Here, we identify some of the more popular AI video generators targeting marketers or anyone in business looking to create everything from HR training videos to YouTube clips, highlight reels, voiceovers or targeted marketing content to be published online and on social networks as YouTube videos, TikTok Reels or video ads.
They pretty much all share some common denominators. They don’t require much, if any, prior experience of editing or design. Many are browser-based, meaning they can be accessed from anywhere. Most work with just a few clicks, requiring the upload of some raw content (a blog post, for example) and the user’s choice of voice and “avatar” to personalize the video. Shortform content, complete with background music, graphics and templates, is output in a few minutes.
However, as SproutVideo’s Conner Carey points out, the videos they generate leave significant room for improvement. “These shortcomings make them most effective at creating faceless videos with voiceovers, such as for FAQs and blog post summaries,” he notes.
Currently, all of these tools produce about the same quality of AI-generated video. What differentiates the good from the bad (and the ugly) is how easily the platform lets you edit the video, adding your own footage, scenes, music and more.
Most reviewers advise trying a few (most offer free trials) to ascertain ease of use and results.
Here’s a pick of 10 AI video generators for marketers available to use today, leaning on the selections of Alex McFarland at Unite.ai.
With Pictory you start by providing a script or article, which will serve as the base for video content. For example, turn your blog post into a video to be used for social media or your website.
“This is a great feature for personal bloggers and companies looking to increase engagement and quality,” says McFarland. “It’s simple to use and takes just minutes before delivering professional results that help you grow your audience and build your brand.”
Another feature of Pictory, for those looking to create trailers or share short clips on social media, is that you can create shareable highlight reels. And you can also automatically caption and summarize videos.
Synthesys relies on text-to-video technology to transform scripts into dynamic media presentations. Creators and companies can use Synthesys to create videos with lip-syncing AI video technology. All you have to do is choose an avatar and type your script in one of 140+ languages, and the tool will do the rest.
The software offers 69 real “Humatars,” and a voicebank of 254 unique styles. It also offers full customization, an “easy-to-use” interface for editing and rendering, and high-resolution output. Again, it is aimed at marketers or creators wanting to generate explainer videos and product tutorials in minutes.
But it’s not to be confused with Synthesia, another platform targeting brands that also enables users to quickly create videos with one of 70 AI avatars. Besides the preset avatars, you can also create your own. Synthesia claims to be used by some of the world’s biggest names like Google, Nike, Reuters and BBC.
McFarland notes that Synthesia’s AI voice generation platform “makes it easy to get consistent and professional voiceovers, which can be easily edited with the click of a button.” These voiceovers also include closed captions. Once you have an avatar and voiceover, you can produce quality videos in minutes with more than 50 pre-designed templates.
If you’re looking for a more powerful AI to generate marketing and explainer videos, InVideo might be the one. It doesn’t require any background in video creation or video editing, either. All you have to do is input your text, select the best template or customize your own, and download the finished video. The video content can then be shared directly to social media. InVideo says its users develop promo videos, presentations, video testimonials and slideshows.
HeyGen claims to make video creation “as easy as making PowerPoints.” Once again, the process is to record and upload your real voice to create a personalized avatar, or simply type in the text that you want. There’s a wide range of voices, with more than 300 to choose from. There are multiple customizations available, including combining several scenes into one video and, of course, adding music that matches the theme of the video.
The Deepbrain AI tool offers the ability to create AI-generated videos using basic text. Simply prepare your script and use the text-to-speech feature to receive your first AI video in five minutes or less.
VEED also makes it easy to transcribe your video files in one click. All you have to do is upload your video, click “Auto Transcribe,” and download the transcript. With its free video editing app, you can work on creating content right in your browser.
Fliki apparently makes creating videos as simple as writing with its script-based editor. To McFarland, Fliki stands out from other tools because it combines text-to-video AI and text-to-speech AI capabilities to give an all-in-one platform for content creation. It features more than 2,000 text-to-speech voices across 75+ languages.
The Colossyan video generator enables users to choose from a diverse range of avatars and provide the avatar with a script. After your first video is generated, you can then target different regions by auto-translating your whole video with the touch of a button. You can easily change accents and clothing and choose from upwards of 120 languages.
Elai.io users generate video from the link to an article or a blogpost in just three clicks. You first copy and paste a blog post URL or HTML text before choosing one of the templates from the library. All that’s left to do is review the video, make any changes, and render and download it. There are over 60 languages available and more than 25 avatars to choose from. Besides selecting a presenter from the library, you can also request a personal avatar.
Biteable is an AI video assistant and in-browser editing suite that helps create simple, templated videos from script to edit with just one prompt. You choose the video type (explainer, promo, how-to, etc.), the format (landscape, vertical or square), and the visual style from a variety of options and, of course, enter a descriptive prompt. It generates a slideshow-style video complete with AI-generated script, stock video, images, and royalty-free background music.
While it may not win any awards for cinematography, “it’s incredibly useful for creating quick social videos or promoting product updates,” rates Vidyard.
Vidyo.ai uses AI to create Reels, TikToks, and YouTube Shorts from long-form video content. Once you upload a video or insert a URL, the platform takes a few minutes to produce a handful of potential short videos with captions. Munch is a comparable tool spotted by Vidyard, “albeit with slightly less impressive results.” However, some users may find the customization functionalities easier to use.
A room-size computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the US Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.
The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past — and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.
The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.
Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction of what comes next the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better and better as they interact more with users.
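For readers who want to see the idea in code, here is a minimal sketch of Rosenblatt's learning rule as it is commonly described: when the prediction is wrong, the weights (the machine's "connections") are nudged toward the correct answer. The feature vectors and learning rate below are illustrative; the original Perceptron did this in analog hardware rather than software.

```python
# Minimal sketch of the perceptron learning rule (illustrative).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: +1 or -1 for the two categories."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation >= 0 else -1
            if prediction != y:                        # wrong answer: adjust connections
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

# Example: learn a simple linearly separable rule (is the first value larger?).
data = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.6]]
targets = [1, -1, 1, -1]
w, b = train_perceptron(data, targets)
```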
Back then, however, it quickly became apparent that these early AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language — a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.
Enthusiasm returned in the 1980s with rule-based expert systems, but it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.
The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven approaches to machine learning that changed how AI acquired knowledge.
Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs like those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.
Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”
But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.
For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm — unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.
Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.
Lessons to Heed
In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.
The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.
A new report from SMPTE provides background to media professionals on how artificial intelligence and machine learning are being used for production, distribution and consumption.
The report explores ethical implications around AI systems and considers the need for datasets to help facilitate research and development of future media-related applications.
“Bias is the model-killer,” SMPTE contends. “Black box algorithms help no one. Intellectual and cultural diversity is critical to high performance. Product teams must broaden their ecosystem view.”
SMPTE has called on the Media & Entertainment industry to be more active and vocal in the debate about developing ethical AI systems. Doing nothing, or not doing enough, is not an option because “failure may come at a high human cost,” the organization says.
“The time to discuss ethical considerations in AI is now, while the field is still nascent, teams are being built, products roadmapped, and decisions finalized. AI development is no longer just a technical issue, it is increasingly becoming a risk factor.”
This call for action forms a substantial part of the “SMPTE Engineering Report: Artificial Intelligence and Media,” which was produced alongside the European Broadcasting Union (EBU) and the Entertainment Technology Center (ETC). The report was the result of a task force on AI standards in media that began in 2020. Since then, it has become clearer to everyone that AI will transform the media industry from pre-production through distribution and consumption.
“I believe that AI will continue to see exponential growth and adoption throughout 2024,” said SMPTE President Renard T. Jenkins. “Therefore, it is imperative that we examine the overall impact that this technology can have in our industry. That is why the progressive thought leadership presented in this document is so important for us all.”
The report begins with a technical understanding of AI and machine learning, followed by the impact these technologies will likely have on the media landscape. The report then moves on to examine AI ethics and ends by discussing the role that standards can play in the technology’s future.
The report describes today’s AI as “disruptive, vague, complex and experimental” — all at once. “It is difficult to understand, and easy to load up with fears and fantasies,” the report reads.
“This is a dangerous combination. The convergence of corporate hype, fledgling methods, biased datasets, and the urgency to productize, are all fertile grounds for failure,” it continues.
“Learning through failure is generally a good way of testing and improving tentative tech like AI — except when models are put in a position to make decisions about policing, hiring, synthetic conversations, or even content recommendation and personalization.
“Then, failure may come at a high human cost.”
Organizations must examine the downside risk of deploying underperforming and unethical AI systems, especially because, in most cases, ethical and technical requirements are the same.
“For example, unseen bias is as bad for model performance as it is discriminatory. Model transparency is not just an ethical consideration: it is a trust-building instrument.”
SMPTE urges the M&E industry to bring its own voice “and nearly 150 years of success marrying human and technological genius” to the debate.
“Media holds a substantial and powerful place in our society as the mass distributor of human narratives and social norms. Media must bring this unique voice and hybrid human/machine culture to AI development and the debate on AI ethics.”
The report explains how Media & Entertainment companies collect and process large amounts of consumer data and that increasingly, this means they must comply with a growing list of legal regimes and data governance requirements. Similarly, there’s a substantial opportunity to use computer vision in virtual production and post-production processes.
SMPTE suggests that the quality and diversity of training sets — “how color correction can affect representation of minorities” — and the use of deepfake technology are “critical areas” where ethical considerations are paramount.
The media industry’s history of sophisticated legal practice around likeness rights, royalties, residuals, and participations is a “substantial advantage in navigating issues related to computational derivatives of image and content,” it writes.
The paper argues for a standards-based approach to verification and identification, and not only of the image (e.g., format and technical metadata), but also of the talent itself and the authenticity of content.
“Persistent, interoperable, and unique identifiers have aided media supply-chains in the past, and could well help with the labeling and automating the provenance of authentic talent in the future age of AI in M&E,” it states. Such work is ongoing, including at the Coalition for Content Provenance and Authenticity (C2PA).
“At a minimum, requirements for data and model transparency would go a long way towards reinforcing trust in computational methods and help convert those in the industry still reluctant to use statistical learning to optimize human processes.”
Around the corner, the development of conversational agents (chatbots) creates serious ethical risks, especially as the industry looks to create highly immersive and personalized experiences in the metaverse.
“Bias is the model-killer,” SMPTE contends. “Black box algorithms help no one. Intellectual and cultural diversity is critical to high performance. Product teams must broaden their ecosystem view.”
There’s a call for ethical considerations to be embedded in all aspects of digital product design and development. Seeding ethics at the product level is essential if bias is to be understood as a complex ecosystem of inputs, features, models, outputs and outcomes, it says.
“Any organization’s output, products, and decisions (deliberate or not) inherently fit its culture and values. This is why AI ethics is high-stakes: it deploys an organization’s culture and values on a large scale,” the report argues.
“Because they shape society at scale and have a history of taking the public interest seriously, media companies have a distinct responsibility to move forward with their AI ambitions, with full awareness of these applications’ ethical considerations. They should ensure that all aspects of their development (including data collection), deployment, and end-uses, support the law as well as their own values regarding privacy, justice, tolerance, and human rights.”
The AI Ethics Pipeline
The entire value chain of AI development, from product design to data collection to model deployment, should be secure, transparent, explainable and auditable, says SMPTE.
In contrast, black box machine learning frameworks are “ethically and statistically dicey. They foster sloppiness in data science teams and mistrust for those already suspicious of machine models. What cannot be explained should not be deployed in a decision-making environment.”
The report continues: “In a world where organizations are often too suspicious or too enthusiastic, only secure, transparent, explainable, and auditable machine models can scale resiliently. Additionally, all stakeholders deserve transparency, each in their own language, across different points of view and technical sophistication.”
Ethics, it says, should be part of Quality Assurance for any and all computational systems.
“AI is still a technical ungoverned frontier. Everything around it, from roadmapping to modeling to seeding in company culture, is complex and challenging. Mistakes will happen. Organizations must communicate comprehensively and with humility about their journey to approach and implement processes around ethical AI, for the benefit of all.”
With technical standardization of AI still in its infancy, there’s an imperative on the media industry to provide language and frameworks to support its development, SMPTE urges.
“AI is an emerging technology, and AI ethics is an almost entirely blank slate. Examples of successful, organization-wide implementation of ML transparency and trustworthiness are extremely rare.”
But this should be motivation to try harder, SMPTE says. “Transparency is not just key: it is a perennial concern.”
The report warns of “model drift,” in which the world changes, the problem changes, the data changes, and model performance suffers. “There is no longer a fit between the model and the system, or behavior that it is representing. Only transparent and auditable models can catch model drift before it causes damage.”
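To make the idea concrete, here is a minimal sketch of how a team might watch for model drift in practice: compare the distribution of a production feature (or of model scores) against a reference window and raise a flag when they diverge. The function names, threshold and data are illustrative only and are not drawn from the SMPTE report.

```python
# Minimal sketch: flagging potential model drift by comparing the distribution
# of recent production scores against a reference window logged at deployment.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the recent sample is unlikely to come from the
    reference distribution (two-sample Kolmogorov-Smirnov test)."""
    result = ks_2samp(reference, recent)
    return result.pvalue < alpha

# Example with synthetic data: the underlying behavior has shifted.
rng = np.random.default_rng(0)
reference_scores = rng.normal(loc=0.6, scale=0.10, size=5_000)
recent_scores = rng.normal(loc=0.5, scale=0.15, size=1_000)
print(drift_alert(reference_scores, recent_scores))  # True -> investigate and retrain
```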
AI has a visual plagiarism problem, raising legal challenges and the urgent need for industry collaboration in ethical AI development.
March 10, 2024
How Brands Can Better Connect With Creators and Influencers (and Understand the Difference)
TL;DR
The creator economy represents a $250 billion revenue opportunity today, but brands need to adopt a nuanced approach to engaging creators in their cause.
Understanding a creator’s long-term goals, and helping them reach those goals, is not just beneficial for the creator, but also can drive future value for the brand.
A new Deloitte survey explains what differentiates content creators from influencers and what drives consumer trust in creator-brand endorsements.
The creator economy could approach half-a-trillion dollars by 2027, according to Goldman Sachs, but many brands could fall short of tapping into its full potential.
Deloitte, in its “Creator Economy in 3D” report, says that’s because brands are approaching the creator economy with influencer marketing strategies rather than ones tailored to creators. Deloitte’s study surveyed more than 2,000 consumers, more than 500 creators, and 500 brands.
“While influencers are adept at providing brand exposure across a wide assortment of audiences — content creators often help brands penetrate deeper into niche communities and bridge a more personal connection with their audience,” it states.
The management consultancy outlines how brands use “creator” and “influencer” interchangeably — and with reason: Both produce content, act primarily on social media channels, and seek to monetize their work.
“But content creators tend to be better suited for specific objectives, have a deeper depth of audience penetration, and appeal to consumers on a different level than influencers.”
The report outlines these differences in detail: For instance, content creators are said to drive deeper relevance within niche communities and sub-communities, whereas influencers tend to drive broader relevance across a diverse range of audiences.
To get the most out of their creator partnerships, Deloitte advises brand managers to understand the underlying drivers behind the consumer-creator bond and how this translates to brand trust.
For instance, its survey found that certain creators carry more influence than others. The average consumer has five favored content creators — which Deloitte defines as creators that consumers actively seek out for new updates and content, as opposed to creators that consumers engage with passively in their feed.
“These creators are the social media equivalent of a favorite TV show, perhaps with less regular schedules. This trend also appears to be increasing with each generation, with the median Gen Z social media user surveyed having 10 of these favored creators.”
Recognizing, supporting, and facilitating the creator is also a key element in strengthening a creator-brand partnership. Brands are encouraged to think of it “somewhat like a business-to-business partnership.” In other words, brands succeed when their creator partners succeed.
Most brands surveyed provide creators with assistance on the creative process, content development, networking, and brand management. Some offer financial and wealth management tools and services to their creator partners.
Paying creators “competitive, fair, and equitable market value,” and paying them on time, are also emphasized.
“Understanding a creator’s long-term goals, and helping them reach those goals, is not just beneficial for the creator, but also can drive future value for the brand.”
Deloitte concludes: The creator economy operates effectively in how it reflects and shapes culture — which fundamentally runs on networks of shared values, experiences, and interests among groups of people. Knowing this, successful brands acknowledge the need to build networks of multiple creators to maximize their reach and impact across the various niches within a particular audience.
Influencer marketing strategist and creator economy advisor Lindsey Gamble discusses key trends and his predictions for the future.
March 9, 2024
What Happens If (When) You Open AI’s Black Box?
TL;DR
To understand the strengths and limitations of artificial intelligence we may need to adopt a new perspective, argues Jaron Lanier, a tech guru who currently works for Microsoft.
He suggests that even words like “intelligence” and “training” don’t provide us with information about the technology’s weaknesses.
Lanier’s essay coincides with Elon Musk’s apparently irony-free bid to sue OpenAI for putting profit before humanity.
“If we can’t understand how a technology works, we risk succumbing to magical thinking,” says Jaron Lanier in the tech guru’s latest contribution to the debate on AI.
“Is there a way to explain AI that isn’t in terms suggesting human obsolescence or replacement? If we can talk about our technology in a different way, maybe a better path to bringing it into society will appear.”
This is the week in which Elon Musk filed a lawsuit against OpenAI for putting profit before the good of humanity in proceeding full steam to develop next-level Artificial General Intelligence.
Since Microsoft has invested heavily in OpenAI to do that, Lanier’s intervention could be seen as an attack on Musk for raising the ‘threat’ levels of AI, but Lanier is too smart and liberal an operator for that.
He also wants all AI leaders to open the black box and show us what is inside.
Lanier attempts to do this in his essay published in The New Yorker. He is alarmed by the fever-pitched discussion of AI, in which the loudest voices sit at the extremes: doomsayers fearful of the technology’s inevitable human apocalypse, and those who think that humans will always be masters of their own destiny, evolving with AI as a force for overall good.
Some hold both positions.
“I have trouble understanding why some of my colleagues say that what they are doing might lead to human extinction, and yet argue that it is still worth doing,” Lanier writes. “It is hard to comprehend this way of talking without wondering whether AI is becoming a new kind of religion.”
Lanier advocates a third way, a middle way, which he hopes offers an alternative to the view that AI does nothing but regurgitate — while also communicating skepticism about whether AI will become a transcendent, unlimited form of intelligence.
He thinks that we should start by demystifying what AI is.
“We usually prefer to treat AI systems as giant impenetrable continuities. Perhaps, to some degree, there’s a resistance to demystifying what we do because we want to approach it mystically,” he argues.
He continues, “One problem with the usual anthropomorphic narratives about AI is that they don’t nurture our intuitions about its weaknesses. As a result, our discussions about the technology tend to involve confrontations between extremes: there are enthusiasts who think that we’re building a cosmically big brain that will solve all our problems or wipe us out, and skeptics who don’t see much value in AI.”
He takes issue with the term “artificial intelligence,” suggesting it promotes the idea that we are making new creatures instead of new tools. “This notion is furthered by biological terms like ‘neurons’ and ‘neural networks,’ and by anthropomorphizing ones like ‘learning’ or ‘training,’ which computer scientists use all the time.”
It’s also a problem that “AI” has no fixed definition.
“It’s always possible to dismiss any specific commentary about AI for not addressing some other potential definition of it,” he says.
The lack of mooring for the term coincides with a “metaphysical sensibility” according to which the human framework will soon be transcended.
In an earlier essay he discussed reconsidering AI as a form of human collaboration. Here he deconstructs how AI works for the layman.
“Most non-technical people can comprehend a thorny abstraction better once it’s been broken into concrete pieces you can tell stories about, but that can be a hard sell in the computer-science world,” he says.
The science-fiction writer Arthur C. Clarke famously stated that a sufficiently advanced technology is indistinguishable from magic. Lanier says that is only true if that technology is not explained well enough.
He adds, “It is the responsibility of technologists to make sure their offerings are not taken as magic.”
Technologist Jaron Lanier argues that reconceiving AI as a social collaboration opens up new strategies for long-term economics and safety.
March 6, 2024
How Sun Serves as the Color-Killer in “Dune: Part Two”
TL;DR
Playing with physics and light for “Dune: Part Two” gave the homeland of the villainous Giedi Prime a startling black-and-white look.
For director Denis Villeneuve, an environment that would breed the Harkonnen’s fascist culture is a planet where the sun is so bright and blinding that all color is washed out.
DP Greig Fraser shot the scene with infrared imagery, although this caused complications for costume design.
The film is one of a series of recent projects that have shot using the technique, including “True Detective: Night Country” and “The Zone of Interest.”
In a film awash with frames of retina-burning golden intensity, the striking monochromatic scene of the gladiator fight introducing the psychopathic Feyd-Rautha (Austin Butler) stands out.
Dune: Part Two director Denis Villeneuve wanted the aesthetic of the evil Harkonnen to signify the polar opposite of the sunlit faith of the desert dwelling Fremen.
Dune author Frank Herbert had never established much information about the Harkonnen homeworld, called Giedi Prime, other than that it had been industrialized into an almost complete wasteland.
“I love how Frank Herbert shows how the psyche of the tribes of the people are influenced by the landscape,” Villeneuve told Susana Polo at Polygon. “If you want to learn about the Fremen, you just have to learn more about the desert and it will give you insight about their way of thinking, their way of seeing their world, about their culture, about their beliefs, about their religion.”
But with far fewer Harkonnen details to work with, Villeneuve was forced to improvise, and like any filmmaker, he settled on using light to tell the story — specifically the light from Giedi Prime’s sun.
“I wanted to find something that had the same evocative power and the same cinematic power for the Harkonnens,” he said. “I wanted to be generous with their world and make sure that it will be singular, and it will inform us about where their political system is coming from; where their sensitivity, their aesthetic, their relationship with nature is coming from.”
In an interview with Hoai-Tran Bui for Inverse, he added, “The idea that the sunlight, instead of revealing colors, will kill colors; that their own world will be seen in a daylight as a bleak black-and-white world, will tell us a lot about their psychology.”
He took the idea to Australian cinematographer Greig Fraser, who won an Oscar for his work on Part One, and Fraser suggested filming the scenes using Infrared photography.
The DP had used the technique on 2012’s Zero Dark Thirty and 2016’s Rogue One: A Star Wars Story. “It’s the same light the security camera uses, and you don’t see it. So, my fascination with infrared started because our eyes can’t see it, but the camera can,” Fraser told Jazz Tangcay at Variety.
Fraser shot the Giedi Prime scenes on an Alexa LF, modified so it could only see infrared and not any visible light. Since the sun emits infrared (creating life on this planet), it felt like a suitable creative solution for depicting the life-sucking, environment-destroying Harkonnen.
The result is an eerie, translucent effect on human skin, aided by the fact that the planet’s population is bald. But by creating the in-universe rule that the sun was washing out the colors, the filmmakers created other challenges. One was: what happens when characters step from the shadows into the sun?
“We needed to come up with rules for what the sun does,” Fraser told Inverse. “Our rules were effectively everything that the sun hits is washed out. So it’s direct sun and it’s bounced sun.”
When inside or in the shade, characters are lit by artificial light, Fraser explained. To achieve transitions, such as when Léa Seydoux’s Lady Margot Fenring emerges from the shade into the sun during the gladiator fight, Fraser had to shoot on a 3D stereo rig. One camera filmed as normal to a full-color sensor; the other, aligned on the rig, shot infrared imagery.
“We made sure we had lights that put out infrared for the infrared camera, and we had lights that the infrared camera couldn’t see, which were LEDs that put out visible light but don’t have infrared light. We had to have two different types of light sources on set that each camera could see separately and see differently.”
Another challenge emerged when they started to shoot: the photography revealed that some of the costume fabrics — which were black in daylight — appeared white under infrared.
Fraser says he didn’t know why certain fabrics worked, telling Variety, “I’m sure there’s a rhyme and reason, from a material standpoint. I just know we had to do a lot of camera testing to make sure everyone was dressed in black.”
The scene, which is a birthday celebration-cum-Nazi rally, also features strange ink-blot fireworks. Villeneuve told Fraser, “They’re like anti-fireworks. They suck the light out as opposed to putting the light in.” Speaking to Inverse, Fraser adds, “We worked pretty hard at trying to achieve that goal, this kind of anti-explosion type of light.”
Fraser elaborated on the decision to shoot infrared in an interview for the ARRI Rental website.
“We’d been on this planet for night interiors in part one, but we’d never been outside, so we were discussing what it would look like. I did a test for Denis where the inhabitants have very pale white skin, based on the notion that there’s no visible light from the sun on Giedi Prime, only infrared light. When the characters go from inside to outside, they effectively go from normal light to infrared light,” he detailed.
“On Rogue One, ARRI Rental modified some ALEXA 65s to do exactly the same thing, and we used them as VFX cameras, lighting parts of the set with IR light that didn’t affect the main image,” he added. “We just took that a step further and used them as our main cameras for Giedi Prime. They literally only record the infrared that bounces off skin or clothes, so colors are rendered as different tones and something that looks black to the eye might look white to the camera. It meant that we had to have exterior and interior versions of the same costume for some characters.”
It’s worth noting that infrared shooting techniques are in vogue just now. Hoyte van Hoytema used the 3D rig technique to capture eerie sequences for Jordan Peele’s Nope, and this inspired Florian Hoffmeister to go further and shoot extensive night exteriors for the unsettling Alaska-set murder mystery True Detective: Night Country (also on paired Alexas, with one camera modified without a color filter and with infrared lights).
Most notably, Łukasz Żal shot infrared sequences for The Zone of Interest, although here the rest of the picture is so bleak that these scenes represent hope amid the darkness.
Cinematographer Greig Fraser employs the ARRI Alexa LF large-format digital camera for his collaboration with Denis Villeneuve on “Dune.”
March 6, 2024
“Feud: Capote vs. The Swans” Goes “Behind the Scenes” of the Black and White Ball With an Imagined Documentary
TL;DR
The third episode in Season 2 of FX anthology “Feud: Capote vs. The Swans” travels back to 1966 for Truman Capote’s High Society New York ball, which is recreated in the style of a documentary that was never actually shot.
Director Gus Van Sant shoots in the style of Albert and David Maysles and other mid-1960s documentarians who practiced the Direct Cinema aesthetic: black-and-white, handheld, favoring immediacy and reportage over gloss and precision.
The Maysles did spend time with Capote in 1966, filming documentary short “With Love From Truman,” but it had nothing to do with the ball.
Creating a faux-documentary gave Van Sant the freedom to run around with a handheld camera, and allowed viewers to see the Swans’ many layers of masks.
The third episode of Season 2 of FX anthology series Feud: Capote vs. The Swans travels back to 1966 for Truman Capote’s “best party ever.”
“Masquerade 1966” relives the legendary Black and White Ball hosted by the infamous writer at New York City’s Plaza Hotel — a lavish event boasting a guest list that included everyone from Frank Sinatra and Andy Warhol to Lauren Bacall, Ben Bradlee, the Kennedys, the Agnellis, the Vanderbilts, and the Astors. “As spectacular a group as has ever been assembled for a private party in New York,” according to The New York Times.
Director Gus Van Sant and showrunner Ryan Murphy present the hour-long episode as a black-and-white documentary of the party and Capote’s (Tom Hollander) weeks of preparations for his big night.
At its heart, it’s a flashback episode, with the Swans — Babe Paley (Naomi Watts), Slim Keith (Diane Lane), and Lee Radziwill (Calista Flockhart) — seen in various states of anxious planning. Creating even more drama, two of the high society Swans are under the impression that they would be the event’s “guest of honor.”
The documentarians catching this all, though rarely glimpsed, are depicted as real-life filmmakers Albert and David Maysles. But no such Maysles documentary was ever shot, let alone released. “It was an invention of Ryan [Murphy’s] to pretend like Truman hired somebody to shoot the ball, and then decided not to go through with it at the end,” Van Sant tells The Hollywood Reporter. “So that was our concept, and our footage that we shot was supposedly their unused footage.”
As THR’s Mikey O’Connell points out, there is a seed of truth here. The Maysles did spend time with Capote in 1966, filming documentary short With Love From Truman. It just had nothing to do with the ball.
“That was an invention,” Van Sant confirms to Joy Press at Vanity Fair. He did watch footage from the short film the Maysles shot of Capote when he was younger, but creating a faux-documentary gave Van Sant the freedom to run around with a handheld camera, and allowed viewers to see the Swans’ many layers of masks.
But though this peek behind the scenes is imagined, “it feels oddly real—like watching never-before-seen footage unearthed from an archive,” according to Coleman Spilde of The Daily Beast. “The episode is a fine example of how to meld past and present, fiction and reality, for something unique.”
Van Sant explains the aesthetic he deployed, saying that in the ‘60s, cinematographers were freeing themselves of the tripod.
“It’s been emulated to the point that now it’s our standard movie style, which is handheld. And handheld today means, like, jerk it around on your shoulder and move it. The people in the ‘60s were trying to hold it really still. They were also trying to get the action, so that was one little aspect of emulating their style. They weren’t trying to make it bumpy, they just…didn’t have a tripod!”
Matt Zoller Seitz at Vulture calls it “the stylistic peak of the series” and talks to Van Sant about creating it with DP Jason McCormick.
“I’ve watched the work of a lot of documentarians, particularly ones who were part of the same movement as Albert and David Maysles,” Van Sant relays. “There was also D.A. Pennebaker, and Frederick Wiseman and Richard Leacock. The films they made were always fascinating to me. They were informing the French New Wave, partly, and by the 1980s, their work influenced MTV videos, as well as films like Oliver Stone’s JFK, which utilized MTV-style camerawork that was emulating the work of documentary filmmakers from that period.”
The director adds that if you construct reality properly, it really doesn’t matter where you put the camera. “If it’s a reality that makes sense, you could shoot it from the corner of the room with your phone. That’s what those documentarians were doing: They went to a location and put themselves someplace, and it was usually the wrong place in relation to where the action was going to be, so they’d have to zoom in to get to the shot they needed. Or they’d try to run over there. A lot of times they got a bad shot. But it was the action you were looking at anyway. You can kind of force yourself into their situation.”
Van Sant does in fact sneak in a few shots from the actual event shot by newscasters of the arrival of some of the guests. And there was no shortage of film of Truman Capote to help recreate his character.
The director is no stranger to experimenting with form, often in stories that meld reality and drama whether giving William S. Burroughs a supporting role in Drugstore Cowboy, or interpreting the life of Harvey Milk (Milk), or shooting Elephant and Last Days, which are reactions to Columbine and the death of Kurt Cobain but not conventional docudramas. His most formal exercise was remaking Hitchcock’s Psycho shot for shot.
“I always try to make a story conform to the reality as I know it,” he told The Daily Beast. “When I first started out with Drugstore Cowboy, I was putting so much emphasis on blocking. I ended up doing it in the way I understood Stanley Kubrick did his filmmaking: He would work on a scene first and would figure out the shots afterwards. After that, I started working in that manner,” he said.
“As I got more familiar with my cinema, the blocking started to become more and more complicated, because I realized that anything that happens in reality defies the logic of how you would block it in visual fiction. Even with something that happens in a simple, given space, like a convenience store, the way people move and what they do is very surprising. If you were to shoot a basic interaction between two people in a convenience store with your phone and then watch it a couple of times, you’d realize the blocking of reality is quite unexpected. People might enter and exit before they even do anything! Odd things happen all the time. If you can capture those moments and use them in your fiction, you can represent reality in an almost spooky way.”
He adds in the same interview, “Emulating different forms to show different things has always been something to work on, like having a recipe to make. We were doing the same kind of thing on the third episode of Feud but with the films of the Maysles and D.A. Pennebaker. We were trying to approximate a documentary of the Black and White Ball so we could see what it would have been like to capture the black-and-white ball, as opposed to explaining it cinematically. It was an experiment. We were emulating films that existed. Their chaos was inspirational.”
In “El Conde,” Chilean director Pablo Larraín turns the story of General Augusto Pinochet into a stomach-turning tragicomic melodrama-horror movie.
March 4, 2024
A Framework for the Future of M&E: The Developments We Can Expect
TL;DR
Media industry consultant Doug Shapiro aims to establish a framework for thinking about how value flows along the media industry value chain over time between creators, intermediaries and consumers.
Shapiro’s framework comprises four “tectonic trends”: fragmentation, disintermediation, concentration and virtualization.
Shapiro says “The greatest hope for creators lies in better monetization tools and business models, not more equal popularity distributions.”
Media industry consultant Doug Shapiro has given detailed consideration to the current state of Media & Entertainment and where it might go to next. It’s a helicopter view to escape the noise of day-to-day news and looks at how media value is created and flows today and tomorrow.
The goal of his four-part series is to establish a framework for thinking about how value flows along the media industry value chain over time between creators, traditional intermediaries, new intermediaries, and consumers. It comprises four “tectonic trends”: fragmentation, disintermediation, concentration and virtualization.
All these trends started with digitization, which has created a universal language (bits and bytes, or data) for information goods and a standard for distribution (also bits and bytes, or data) from which his four trends flow.
As Shapiro puts it, “Bits have universality, in the sense that they are the standard unit for representing, processing, storing and transporting all digital information, regardless of the source, application, device, medium or communications network. This standardization of all media paved the way for all innovation that has followed since.”
The first trend, Fragmentation, is occurring because systematically declining barriers at each step of the content development process (production, marketing, distribution and monetization) have led to a near-infinite amount of content; and because the very introduction of that content is changing consumers’ definition of quality.
What this means is that while time spent on content is effectively saturated, the means by which consumers consume it has splintered into a thousand digital apps and continues to do so.
What’s more, generative AI will only accelerate things. “Production has been the toughest nut to crack, especially for the most expensive and complicated forms of media. The advent of GenAI could pull down this last barrier by democratizing high production value creation of video, music and games and blurring the quality distinction between professionally-produced and independent/creator content.”
There’s a ripple effect. As content supply continues to grow faster than time spent with media, consumer attention becomes scarcer. This means that knowledge of the consumer is highly valuable for companies to target content and advertising.
“As fragmentation continues and the constraints on using third-party data increase (post-GDPR/App Tracking Transparency/Google cookie deprecation), having first party data, at scale, is arguably more valuable and scarcer than ever. Data will likely be a — if not the — key source of competitive advantage in curation and marketing.”
The one area that technology hasn’t managed to commoditize down to bits and bytes — yet — is the scarcity of a live event.
Shapiro’s second post turns to the disintermediation of traditional intermediaries, or the declining bargaining power of studios.
“They have historically taken the lion’s share of value because they do things that have been hard for creators to do themselves such as financing and coordinating production, marketing, monetization and distribution.”
Technology, says Shapiro, is systematically making all these things easier for creators to do themselves, improving creators’ bargaining power or enabling them to circumvent these intermediaries altogether.
“Some intermediaries still own or control very valuable IP, they are marketing machines and many creators will still want the validation of working with them, but on the margin, they are getting squeezed,” he writes.
“And the clear arc of technology is that it will continue to marginalize them. For instance, GenAI will democratize high-quality production tools and NFTs may democratize access to capital.”
The third fundamental trend, concentration, is the consequence of putting everyone on a big network. Networks are subject to powerful positive feedback loops that produce extreme outcomes. In the case of media, they concentrate both power (on the supply side) and attention (on the demand side).
Historically, distribution of media was siloed, local and one-way. Today, most media is distributed on universal, global, two-way networks (Meta, YouTube, TikTok).
“Combined with these companies’ global reach and universality, that means traditional media companies are now contending with distributors and competitors with unparalleled scale, resources and the ability to cross-subsidize losses indefinitely,” Shapiro says.
Creators and consumers, he adds, are contending with “new gatekeepers” with unprecedented power, although they wield it differently than traditional intermediaries.
The network effect also amplifies the popularity of certain content, which becomes in and of itself a new form of currency. Popularity (or number of hits/clicks), he argues, contains within it information about content that other people use to judge its quality. It is a metric that is largely taken out of the hands of the original gatekeepers.
With the fourth and last trend, virtualization, Shapiro tries to tie it all together.
Virtualization refers to the steadily blurring lines between the physical and the virtual. For Shapiro this is the most hopeful trend for media overall, but also the most uncertain and furthest out.
“The promise of virtualization is that as our lives become more virtual (and more digital), there will be new ways of interacting with media — new modalities — that increase time spent with media and/or the value consumers place on these experiences,” he writes.
Associated technologies include those that enable new immersive experiences (XR and virtual worlds); more engaging experiences (fan creation and ownership); and new leisure time (AI efficiency gains and autonomous vehicles).
The most tangible of these is the Apple Vision Pro, which could eventually herald a new media software upgrade cycle, and, even further out, Level 4 self-driving, which could free up some commute time. But neither is likely to have a material effect for years.
“The greatest hope for creators lies in better monetization tools and business models, not more equal popularity distributions,” Shapiro says.
“For traditional intermediaries, all of these trends are just…bad,” he argues. “They will likely continue to lose consumption share, cede bargaining leverage to top talent, contend with stronger competitors and face riskier and less profitable businesses.”
AI, bio-technology, and a burgeoning ecosystem of interconnected wearable devices are converging, the Future Trends Institute predicts.
March 4, 2024
Tag, Search, Serve: What You Need to Know About Analytical AI
TL;DR
The ability to tag, search and serve audio-visual assets with speed, ease and human-like intuition is the promise of a new breed of AI tools, analyzed by post-production expert Michael Kammes.
These include Curio, a tool able to analyze and harvest metadata from unstructured video files, just acquired by storage firm Wasabi.
Other post-production tools include StoryToolKitAI, Twelve Labs, Code Project AI and Pinokio.
Generative AI can be used to create audio, stills, and videos but something often overlooked is how useful Analytical AI can be. In the context of video analysis, it would involve facial or location recognition, logo detection, sentiment analysis, and speech-to-text, just to name a few. Analytical tools are the focus of a Michael Kammes podcast, “AI Tools For Post Production You Haven’t Heard Of.”
“Welcome to the forefront of post-production evolution,” he says.
Kammes invites post-production chiefs to take a look at a number of analytical tools. These include StoryToolkitAI, an editing tool that uses AI to transcribe, understand content and search for anything in your footage, integrated with ChatGPT and other AI models. It began as a GitHub project by developer Octimot, runs on OpenAI’s Whisper and Python, and can be used on Blackmagic Design’s DaVinci Resolve among other professional editing systems.
“StoryToolKitAI transforms how you interact with your own local media. Sure, it handles the tasks we’ve come to expect from AI tools that work with media like speech-to-text transcription. But it can understand and execute tasks that it was never explicitly trained for,” he says.
He describes it as a “conversational partner. You can use it to ask detailed questions about your indexed content, just like you would talk with ChatGPT.”
Kammes likes that StoryToolkit runs locally, so users retain privacy even though the application itself is open source. He believes the app’s architecture is a blueprint for how things should be done in the future.
“That is, media processing should be done by an AI model of your choosing and can process media independently of your creative software. Or better yet, tie this into a video editing software’s plug-in structure, and then you have a complete media analysis tool that’s local, and using the AI model that you choose.”
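As an illustration of the kind of local, model-of-your-choosing media processing Kammes describes, the sketch below transcribes a clip with the open-source Whisper package that StoryToolkitAI builds on. The file name is a placeholder, and this is a generic example rather than StoryToolkitAI’s own code.

```python
# Hedged sketch: transcribing footage locally with the openai-whisper package,
# independent of any particular editing application.
import whisper

model = whisper.load_model("base")            # small local model; larger ones are more accurate
result = model.transcribe("interview_take3.mov")  # placeholder file name

# Each segment carries timecodes, so the transcript can be searched and
# mapped back to the source footage inside an NLE.
for segment in result["segments"]:
    print(f"{segment['start']:7.2f}s  {segment['text'].strip()}")
```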
While many analytical AI indexing solutions search your content based on literal keywords, others perform a semantic search by using a search engine that understands words from the searcher’s intent and their search context. This type of search is intended to improve the quality of search results.
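A rough way to see the difference: a keyword index matches the literal phrase, while a semantic index compares meanings. The sketch below uses open-source sentence embeddings to rank shot descriptions against a natural-language query. It is a generic illustration, not Twelve Labs’ actual API; the model name and descriptions are placeholders.

```python
# Illustrative sketch of semantic search over shot descriptions using
# open-source sentence embeddings and cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Imagine these are auto-generated descriptions of shots in an archive.
shot_descriptions = [
    "a river winding through a forest at dawn",
    "close-up of a kitchen faucet running",
    "crowd cheering at a football stadium",
]
query = "videos featuring running water"

# A literal search for the phrase "running water" would miss the first two
# shots; embedding similarity ranks them by meaning instead.
shot_vecs = model.encode(shot_descriptions, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, shot_vecs)[0]

for desc, score in sorted(zip(shot_descriptions, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {desc}")
```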
This is what Twelve Labs seems to have cracked. Its tech can be used for tasks like ad insertion or even content moderation, says Kammes. “Like figuring out which videos feature running water or depict natural scenes like rivers and waterfalls or manmade objects like faucets and showers,” he explains.
“In order to do this, you would need to be able to understand video the way a human understands video, and what we mean by that is understanding the relationship between those audio and video components and how it evolves over time, because context matters the most.”
Cloud storage developer Wasabi Technologies recently acquired Curio AI, a technology developed by GrayMeta that uses AI and ML to automatically generate a searchable index of unstructured data. GrayMeta President and CEO Aaron Edell and his AI team are also joining Wasabi.
According to Kammes, speaking ahead of the acquisition announcement, “Curio isn’t just a tagging tool. It’s a pioneering approach to using AI for indexing and tagging your content using their localized models. Traditionally, analytical AI generated metadata can drown you in data and options and choices, overloading and overwhelming you. GrayMeta simplifies the search process right in your web browser.”
Wasabi is planning to give its users exclusive access to Curio. It will allow them to easily search their huge archives of unstructured data, something that was not possible before, the company said.
“Imagine walking into Widener Library at Harvard with 11 million volumes, and there’s no card catalog,” David Friend, CEO of Wasabi, told Joseph Kovar at CRN. “That’s what we have right now with unstructured data in the cloud. Our acquisition of this machine learning technology is really going to be the most important development since the introduction of object storage itself.”
He added, “Today unstructured data is still in the dark ages. I believe that what we’re doing here with Curio AI to automatically create an index of every face, every logo, every object, every sound, every word, will really revolutionize the utility of object storage for the storage of unstructured data.”
Wasabi plans to fully integrate Curio into its cloud storage, and not offer it as a standalone technology for other storage clouds.
“It’s going to be one integrated product, and it’s going to be sold by the terabyte just like our regular storage, but at a slightly higher price. And for that, you will get unlimited use of the AI,” Friend detailed.
Curio will automatically scan anything that’s put into Wasabi’s storage and produce an index which can then be accessed using the Curio user interface and one of several media asset management systems including Iconik, Strawberry and Avid. The company expects to go to market with the product later this year “with channel partners who sell into the media and entertainment industry.”
Wasabi thinks its combination of object storage and Curio is a step ahead of even Amazon, Google and Microsoft in terms of functionality.
“The hyperscalers can’t do what we’re doing with Curio. I mean, they have a toolkit, and you can assemble something like this if you have the time and money. But there’s nothing equivalent to this that anybody else is offering as far as I know.”
Next, Kammes addresses Code Project AI Server, which handles both analytical and generative AI. He describes it as “Batman’s utility belt,” where each gadget and tool on the belt represents a different analytical or generative AI function designed for specific tasks.
“And just like Batman has a tool for just about any challenge, Code Project AI Server offers a variety of AI tools that can be selectively deployed and integrated into your systems, all without the hassle of cloud dependencies.”
This includes object and face detection, scene recognition, text and license plate reading, and even the transformation of faces into anime-style cartoons. Additionally, it can generate text summaries and perform automatic background removal from images.
The Server offers a straightforward HTTP REST API for integration into a facility or workflow. “For instance, integrating scene detection in your app is as simple as making a JavaScript call to the server’s API. This makes it a bit more universal than a proprietary standalone AI framework,” says Kammes.
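As a rough illustration of that integration pattern, the sketch below posts an image to a locally running server over HTTP. The article mentions a JavaScript call; the same kind of request is shown here in Python for consistency with the other examples, and the port, endpoint path and response fields are assumptions to check against the server’s documentation.

```python
# Hedged sketch of calling a locally running Code Project AI Server over its
# HTTP REST API. Endpoint path, port and response fields are illustrative.
import requests

SERVER = "http://localhost:32168"  # port is an assumption; use your server's address

def classify_scene(image_path: str) -> dict:
    """POST an image to the (assumed) scene-classification endpoint."""
    with open(image_path, "rb") as f:
        response = requests.post(
            f"{SERVER}/v1/vision/scene",
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

# Example usage:
# result = classify_scene("frame_0042.jpg")
# print(result.get("label"), result.get("confidence"))
```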
It also allows for extensive customization and the addition of new modules to suit specific needs.
Finally, Kammes highlights Pinokio, “a playground for you to experiment with the latest and greatest in generative AI.”
Pinokio is a self-contained browser that allows you to install and run various analytical and generative AI applications and models without knowing how to code. It does this by taking GitHub code repositories (called repos) and automating the complex setup of terminals, clones and environment settings. “With Pinokio, it’s all about easy one-click installation and deployment, all within its web browser,” Kammes insists. “It enables you to experiment with various AI services before they go mainstream.”
It is already chock-full of diverse AI applications to play with, from image manipulation with Stable Diffusion to voice cloning and AI-generated video tools. “Pinokio helps to democratize access to AI tools by combining ease of use with a growing list of modules. As AI continues to grow in various sectors, platforms like this are vital in empowering users to explore and leverage AI’s full potential. The cool part is that these models are constantly being developed and refined by the community,” Kammes says.
“Plus, since it runs local and it’s free, you can learn and experiment without being charged per revision. Every week there are more analytical and generative AI tools being developed and pushed to market.”
Creator Economy Amplified: Lindsey Gamble on the Increasingly Blurred Lines of Influencer Marketing
TL;DR
Influencer marketing strategist and creator economy advisor Lindsey Gamble shares his expert insights, discussing key trends and his predictions for the future.
Social media platforms are expanding their focus to include a broader array of content producers, Gamble says, blurring the lines between individual creators and traditional publishers.
He highlights the transformative role of generative AI in social media, enhancing creative processes and enabling creators to reach global audiences with innovative tools like AI dubbing.
Brands are increasingly recognizing creators not just as content producers but as influential consumer segments, partnering with them to promote products and engage audiences on platforms like TikTok.
Gamble predicts TikTok will launch its own shopping day, aiming to compete with major retail events like Amazon Prime Day. He ultimately envisions a shift away from traditional social media platforms towards a more integrated digital ecosystem where creators and brands collaborate to build audiences and monetize content more effectively.
Influencer marketing, a concept that has evolved significantly since its inception, is on the brink of reaching unprecedented heights. With the global influencer market projected to grow to $143 billion by 2030, per Statista, the industry stands at a pivotal juncture. Lindsey Gamble, an influencer marketing strategist and creator economy advisor, offers his insights into the transformative trends shaping this space as we head into 2024.
As part of NAB Amplify’s “Creator Economy Amplified” series, we sat down with influencer marketing strategist and creator economy advisor Lindsey Gamble to take a deep dive into key trends from over the past year and unpack his predictions for 2024.
Gamble, the associate director of influencer innovation at social media management platform Later, has extensive experience and insights that have not only shaped brands’ influencer marketing strategies but also provided a roadmap for navigating the evolving landscape of the creator economy. He touches on the increasingly blurred lines between creators and publishers, the integration of AI in social media, and the strategic moves made by various platforms to cater to creators and audiences alike.
Watch the full conversation in the video at the top of the page.
The Evolving Roles of Creators and Publishers
The social media landscape is undergoing a transformative shift, with the roles of creators and publishers increasingly converging, Gamble notes. Social media platforms, traditionally the realm of individual content creators, are expanding their embrace to include a broader spectrum of content producers, including publishers, digital magazines, and content collectives.
The evolving roles of creators and publishers signal a maturing digital landscape, he says, one where the lines between content production and distribution are becoming increasingly blurred. “It’s one of the most fascinating things that I’ve been keeping an eye on over the last year and change.”
The blurring lines between creators and publishers are evident as platforms like Pinterest and LinkedIn adjust their strategies to cater to a wider array of content producers. “Social media platforms have really over-indexed on creators, helping them with new tools, ways to monetize,” says Gamble. “More recently, that’s changed; we’ve seen a lot of these social media platforms go back to the traditional playbook where they’re also turning their attention to publishers.”
Pinterest, for example, has broadened its definition of creators to encompass magazines and digital collectives, opening up opportunities for these entities to participate in its Creator Inclusion Fund. Similarly, LinkedIn, which has heavily invested in supporting creators, is subtly shifting its focus to appeal to a broader user base, including professionals and businesses.
Amid new opportunities, this shift is not without its challenges. “Creators and publishers are kind of in competition today,” Gamble explains. He describes a landscape where publishers are adopting creator-like strategies to produce content that resonates on a personal level, while creators are exploring monetization avenues traditionally associated with publishers. This competitive yet symbiotic relationship underscores the complexity of the evolving creator economy.
GenAI’s Role in Social Media and Content Creation
The integration of generative AI into social media platforms is revolutionizing the way content is created, discovered, and consumed, says Gamble. “AI is here, it’s not going anywhere.”
Snapchat was the first social media platform to jump into the fray, in early 2023, he recalls, “which was really a surprise.” This marked the onset of a trend that major platforms like LinkedIn, Meta, and YouTube soon followed.
“Now we’re [really] starting to see tools and features that are beneficial to creators,” he says. “That can be something as simple as being able to remove the background out of your existing photo and put yourself in a totally different setting.”
However, the integration of AI is not without its challenges, particularly for creators. As brands begin to leverage AI tools to produce their own content, the space for traditional creator-led content could shrink, Gamble suggests.
But the benefits of generative AI are undeniable when it comes to growing your business as a creator, he adds. “YouTube launched a lot of features last year, and one of the standout ones is an AI dubbing tool,” he says, detailing how the ability to release a single video in a variety of languages not only reduces barriers to reaching a wider audience, but also provides new opportunities for monetization. “It’s a great addition to creators in terms of their businesses.”
Social Becomes Search
The evolution of social media platforms into more search engine-like entities is another significant development Gamble highlights. He notes the importance of AI in this transformation, enabling platforms to offer personalized content recommendations and insights. This shift, however, demands that creators optimize their content for algorithms to increase discoverability, a task that AI tools are making more manageable and effective.
“Because social media platforms are becoming very similar to search engines like Google,” he says, creators not only “have to create great content that people are going to resonate with, and content that feeds into the algorithm, but you also have to take an approach similar to SEO for websites.”
Depending on your perspective, this is either an opportunity to increase discoverability or just another task to do as a creator, Gamble remarks. “In addition to posting that content, you also have to figure out ‘how do I write the right captions… that are going to get me in front of people when they’re using these platforms,’ like a search engine, in addition to those discovery mechanisms.”
TikTok’s E-commerce Ambitions
TikTok is rapidly emerging as a powerhouse in the influencer marketing arena, captivating audiences and creators alike with its dynamic content and interactive features, and now making strategic moves into the e-commerce space.
Instagram remains at the core of influencer marketing, Gamble says, noting that the Meta-owned platform still holds the most ROI for advertisers. “If you look at some of the data, most brands are starting on Instagram, but TikTok is definitely picking up, especially for new brands.”
Anticipating TikTok’s next big move, Gamble predicts, “I think we’re going to eventually see TikTok launch a TikTok Shop Day that pretty much is what Amazon Prime Day is.”
He also points out an emerging trend where “creators are becoming a consumer segment,” with TikTok at the forefront of this shift. Brands are increasingly recognizing the value of partnering with creators not just for their content creation skills but also as influential consumers who can authentically promote products. This approach is particularly evident on TikTok, where the platform’s unique ecosystem fosters a close-knit community of creators and viewers, making it an ideal venue for targeted e-commerce initiatives.
The Future of the Creator Economy
As we look toward the horizon of the creator economy, Gamble emphasizes a pivotal shift in how brands and creators collaborate. Understanding the nuanced needs and preferences of creators is becoming increasingly essential for brands aiming to develop effective products and marketing strategies.
“Brands that look at creatives as a segment, not necessarily just launching products, but figure out a way to position their brand and products as a benefit to creators can really tap into the 300 million or whatever the number is today of creators out there that they’re missing out on,” he explains.
Gamble’s insights suggest a future where partnerships between brands and creators evolve beyond traditional sponsored content. By genuinely understanding and integrating into the creator lifestyle, brands can uncover innovative ways to support creators, whether it’s through products that enhance their creative process or services that address their unique challenges. “Talk to creators, consult with them, and look beyond sponsored content to gain valuable insights,” he advises.
Looking ahead, Gamble shares his vision for the creator economy: “Essentially, I think we’re going to move away from social media platforms,” he predicts, “and everything’s just going to be a platform for creators and brands to create and build audiences and monetize.”
The Creative Possibilities for AI Script Generators
TL;DR
From crafting compelling narratives to refining and optimizing every word, AI has taken center stage in the content production process.
Content marketers, creators, bloggers and social media managers can use a myriad of AI tools to write scripts tailored to resonate with a target audience.
Generally speaking, the more detailed the prompt, the better the output. But while AI tools can be incredibly powerful for generating content quickly, they should not replace human creativity entirely.
In marketing and creator content, a growing trend involves using AI for text generation. Whether you’re looking to brainstorm ideas for your new promo video or want a detailed script for your demo video, you can jumpstart the process with an automated scripting tool.
AI writing tools are only as good as your prompt. For this reason, many AI writing tools offer templates and the ability to enhance your prompts. The best practice is to iterate with the tool, asking it to refine and change the results until you achieve your desired outcome.
“The first attempt probably won’t be good,” advises reviewer Conner Carey at SproutVideo. “But if you instruct the AI to rewrite the script with specific additional parameters, the output improves exponentially, producing a decent script to expand manually.”
Some AI writing tools are built for marketing, storytelling, or both. When choosing an AI tool, review which AI model the tool employs. Some tools use older versions of OpenAI’s ChatGPT or a proprietary AI model.
For businesses and creators needing to create a constant flow of engaging content, there are a number of AI-powered tools available to speed and polish text. Here are some of them.
ChatGPT is the most famous of the bunch. The chatbot has helped generative AI enter the mainstream since its launch in November 2022 and has continued to grow in popularity.
Creators work with ChatGPT to support writing scripts or to get past writer’s block, supplementing the work of scriptwriters.
For example, ask it to give you five ideas for an explainer-style video, copy and paste an existing script outline or even a blog article, and ask it to generate a polished script.
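For readers who prefer to script this workflow rather than use the chat interface, here is a minimal sketch using OpenAI’s Python SDK. The model name, prompts and outline are placeholders, and an OPENAI_API_KEY environment variable is assumed.

```python
# Minimal sketch of the brainstorm-then-refine workflow described above,
# using OpenAI's Python SDK. Prompts and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: brainstorm ideas.
ideas = ask("Give me five ideas for a 90-second explainer video about cloud storage.")

# Step 2: paste an outline (or a blog article) and ask for a polished script.
outline = "Intro: the problem of lost files. Middle: how sync works. End: call to action."
draft = ask(f"Turn this outline into a polished two-minute video script:\n{outline}")

# Step 3: iterate with more specific parameters, as reviewers suggest.
final = ask(f"Rewrite this script in a friendlier tone and keep it under 250 words:\n{draft}")
print(final)
```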
Social-media marketing strategist Laura Bitoiu recommends ChatGPT as the number one tool for beginners in AI because of its ease. “There are basically endless options,” she told Business Insider. “I’ve been using it to brainstorm content ideas, write sales pages and copy, and create new digital products.”
Bard is the Google-developed chatbot. It can generate text, translate languages, and write content, among other uses. When a user enters a prompt into Bard, the chatbot forms a response based on information in its database or another source, like Google’s other services.
As with ChatGPT, creators and industry insiders have found various use cases for Bard, from support with idea generation to light editing to writing short text.
Social media consultant Matt Navarra told Business Insider he found Bard useful in generating alternative text and descriptions for images he uses in his newsletter and other types of content.
Jasper is “a significant step above the competition,” rates Carey, and not (only) because of its AI. “Jasper allows you to save information about your company, brand, products, campaigns, and more. When generating copy, Jasper includes specific details based on the context of the prompt.”
It has a prompt enhancer that does the work of generating a more detailed prompt based on your input. Jasper also includes a template specifically for script writing.
Jasper also utilizes a number of large language models including OpenAI’s ChatGPT, Anthropic, and Google. “This allows the platform to generate the most accurate and dynamic content across subjects,” Carey explains.
WriteSonic is considered by Carey to be a less expensive alternative to Jasper. It offers multiple script-writing AI generation tools, including TikTok scripts and video outlines. But it lacks Jasper’s branded customization and doesn’t employ AI models beyond ChatGPT, he says.
AIContentfy uses natural language processing (NLP) algorithms and ML models to generate content that is customized to a business’s needs. The tool can create various types of content, including articles and blog posts. Users can specify their desired tone, style, and target audience, and the tool will generate content that matches these criteria. AIContentfy says it can help businesses improve their search engine optimization (SEO) efforts and help generate content at scale.
Grammarly is an AI-powered writing assistant that checks for more than 400 types of errors in your writing, including grammar, spelling, punctuation, and sentence structure. Available as a browser extension or desktop app, Grammarly integrates with popular word processors like Google Docs and Outlook. It can also review your social media posts, emails, messages, and comments in real-time to ensure that your writing is “flawless and professional.”
Yoast SEO is a WordPress plugin that helps content creators ensure that their content is optimized for search engines. One of its most powerful features is the ability to create and analyze your XML sitemap automatically. This helps search engines like Google to better understand the structure of your website, making it easier for them to crawl and rank your content accordingly.
Wordsmith uses natural language generation (NLG) to produce “human-like writing.” One of its most significant benefits is its ability to produce localized content. This allows companies to create consistent, targeted communications with global customers in multiple languages.
Acrolinx provides guidance and recommendations to authors and content creators to ensure that their writings are in tune with company guidelines and writing style. For example, it highlights faulty grammar, repetitive text, or overused corporate jargon.
The verdict: Not (Yet) the Finished Article
While AI tools have come a long way in producing coherent content, they may still lack the creativity and finesse of human writers.
“One of the primary concerns when using AI tools for content creation is the quality of the output,” says the team at AIContentfy. “While some AI tools can produce well-written and coherent content, others may fall short in generating human-like language.”
They advise: “Adding your personal touch and editing for clarity and coherence will ensure the content meets your quality standards.” When using AI content generation tools, crafting clear and precise prompts is key; specific prompts yield more accurate and relevant results.
But don’t be afraid to experiment with various prompts and settings within the AI tool. “Test different tones, writing styles, or word limits to see what works best for your content goals.”
As it stands, reliance on AI for the entire text process isn’t feasible. It is wiser to use these tools as complementary to support your creative process.
“Use it to generate ideas, brainstorm topics, or draft initial content, and then add your unique perspective and insights to make the content more engaging and authentic.”
NAB Show and FMC introduce digital and in-person AI training workshops and certifications, tailored for the M&E community.
March 3, 2024
Weird is Wonderful: The Adventure of Editing “Poor Things”
TL;DR
Editor Yorgos Mavropsaridis, ACE discusses his longtime collaboration with director Yorgos Lanthimos on the multi-Oscar nominated “Poor Things.”
Mavropsaridis explains how the dance scene on the cruiser became a microcosm for the whole film.
He shares how he had to rip up the rule book for editing when they first met — and continues to do it on Lanthimos’s films to this day.
Editor Yorgos Mavropsaridis has collaborated with director Yorgos Lanthimos for more than 20 years and knew from the first moment they met that he had to ditch all the rules he had learned.
“The first question is ‘what is reality?’” he told Hayden Hillier-Smith in an extensive interview on The Editing Podcast about the making of awards season favorite Poor Things.
“From the first collaboration I discerned that this is a guy who wants to say things in a different way, not the usual way we approach themes or character. For Poor Things I discovered many themes that, existentially if you like, are about how easy it is to be in a society which puts some rules on you.”
For Lanthimos, storytelling is not a didactic experience. “I want you to feel; no, it’s more loose, it’s more open to interpretations and feelings,” says Mavropsaridis, who is Oscar-nominated again following his work on the previous Lanthimos drama The Favourite.
“All Lanthimos’ films desire a new kind of reality, which has certain rules how an individual can behave and questions whether this behavior is dictated by the character’s needs or by some external force. And of course, it’s the same with Bella Baxter.”
The lead character is played by Emma Stone in what has already been a BAFTA and SAG Award-winning performance.
Mavropsaridis says he still has to go against his instinctive approach to editing. “And I have to surprise myself as well, to create something new and not to repeat the same situations all the time.”
On all their previous films they had mostly used classical music, but for Poor Things the director commissioned Jerskin Fendrix to compose the score months before shooting started. It was not the exact music heard in the final film, but the general themes, so they could have them in editorial after the first cut.
Lanthimos also used a lot of this music on set, having done this previously on The Killing of a Sacred Deer (2017). “Different music was played back for [Stone] to somehow get inspired by the music — to have this surprise of — for the first time — seeing something. There was also music to set the inner rhythm or their external movements because Yorgos likes the choreography of the actors — not only the facial expressions — and this way, the movement, internal or external, is influenced.”
Almost every scene uses an extremely wide-angle fisheye lens. Mavropsaridis explains there was no discussion with the director about when to use them.
“The usual pattern was a fisheye lens, or the 4mm lens with the iris mask, then a long take with movement combination, zoom in or out with tracking shot. Usually, my editing brain needs a reason to use them.
“For example, the first time we used this 4mm lens was when Godwin Baxter went down the stairs, heard the piano playing, and then we cut to him. He looks at her and smiles. At that moment, I thought, ‘Okay, that 4mm lens would be a nice point of view from this strange man.’ Then the next time was when Max comes in, Bella runs and embraces Godwin Baxter like a baby. I thought it was funny: a grown-up woman being like a baby, maybe seeing it through Max’s eyes for the first time — this strange situation. There are always small reasons. Subliminally they might say something to a viewer.”
Another example is when they are in the cruiser and Bella Baxter says to Duncan Wedderburn, “You’re in my sun!” so Mavropsaridis cuts to the 4mm lens when he throws the books away, “just to punctuate the situation. Different reasons all the time.”
It was the director’s idea from the beginning to have the first part of the film be a kind of homage to the old Gothic films shot in black and white. They then break that by introducing the color picture in the beginning.
“It was broken in an interesting way when Godwin Baxter recites the story of Victoria Blessington: how he found her, being pregnant with the baby, was shot in color,” Mavropsaridis says.
“There was a good juxtaposition between black and white in the office narration and the color of her suicide and the discovery of her body, which also breaks interestingly the time continuum between the two situations that are kept continuous with his narrating tale. Then the rest of the film, after her leaving London, was in extreme color and also in different hues of color. For example, the first part in Lisbon was shot with color negative.”
The scene where Bella dances without a care in the world was edited “incorrectly” by Mavropsaridis initially. He felt the choreography should remain intact when in fact it had to be awkward. The creative idea was that the dance was “a microcosm of the big world of the film.”
“Of course, it was very nice to see her in a situation with other dancers, and I thought it was nice to keep this situation with the other people dancing around her that was so funny. But this was not what it was supposed to be,” he says.
“Bella is about 16 years old at that time. She sees people dancing for the first time, and the particular music excites her and she wants to dance, but she hasn’t danced before, her movements are rough and awkward, but she doesn’t care about what other people would think. And we didn’t have to care if her movements were choreographed or ‘correct.’ It had to be spontaneous,” he continues.
“Everybody wants to control her, so the main part of the choreography we had to keep were these movements: When Duncan puts his arm around her, trying to manipulate it, and she reacts, trying to free herself. This dance scene is a microcosm of the whole life situation.”
Once they had reached this point, where everything was in place, the cut ran three-and-a-half hours. Then they had to deconstruct the whole thing.
“We have constructed it. Now let’s take it apart and see what we can do to try this or that. He’s very precise in what he wants, but usually, the edit has to improvise on how to achieve it,” Mavropsaridis says.
“He doesn’t say much, but since we’ve edited together for almost 25 years now, I know what he means, and I know which way I have to tackle it. I have a lot of freedom from him to try things, even if they were not discussed. If I have an inspiration in the middle of the night, I will do it,” he continues.
“Maybe it works, maybe it doesn’t. After many trials and errors, many hours, and many films together, we have reached a very understanding way of working. I believe that Poor Things was an easier film to edit.”
A discussion at the dinner table about marrying Bella includes flash forwards and flashbacks. This was composed in the edit to cut length and keep the story moving, Mavropsaridis told Steve Hullfish on the Art of the Cut podcast.
“It is a method that we developed on the film we did together, Dogtooth, because Yorgos likes to shoot his films in continuity. He doesn’t edit during the shoot, so in editorial we felt that this big scene, with a lot of discussion going on, needed to be compressed.”
Typically, editor and director will have a few issues that can only be resolved in the edit, but there is now a telepathic connection between the pair that is only the result of like minds working together for so long.
“There was a problem about a scene on the cruise ship,” he told CinemaEditor magazine. “While Yorgos was emailing me I sent over my solution and he said, ‘That is exactly what I have in mind.’ I have reached a point of being able to understand his thoughts without talking to him. After so many years I know what the small things are that bother him and what he tries to achieve. At the same time, he has helped me to overcome my laziness of the mind, so it is now easy for me to throw a scene out and do it a different way.
“I always have in my mind Lanthimos’ own phrase — ‘Is that all we can do?’ So I have to prove each time we can do more and better.”
Cinema auteur Yorgos Lanthimos’s “Poor Things” combines cutting-edge virtual production methods with old-school filmmaking techniques.
March 2, 2024
Generative “Eno” Documentary Reshapes the Film for Every Viewing
TL;DR
“Eno,” about the career of famed musician and visual artist Brian Eno, was created as a generative, cinematic documentary.
Instead of a standard bio-doc, filmmaker Gary Hustwit and his collaborators have assembled a “modular” film that shuffles unpredictably between time periods and mediums to offer a composite portrait of its subject.
The technology is developed by Hustwit’s own startup Anamorph, which they call a “generative system” rather than generative AI.
A randomized documentary about the career of legendary electronic music pioneer Brian Eno, in which every screening is potentially and infinitely different, is the latest project to be served up by generative technology.
Eno is a generative cinematic documentary: “Like a musical performance that’s different every night, the film creates a unique viewing experience for each audience that takes it in,” explains Matt Grobar at Deadline.
The 75-year-old British music producer and visual artist, who has worked with David Bowie, U2, Grace Jones and Talking Heads, and who birthed the ambient music genre and frequently mixes technology with art, is ripe for a video retrospective.
“I usually can’t stand docu-bios of artists because they are so hagiographic,” Eno told Variety’s Todd Gilchrist.
So, rather than charting a chronological path through Eno’s career, documentarian Gary Hustwit proposed using a generative system to create a film that would literally be different for every audience that screened it.
“The use of randomness to pattern the layout of the film seemed likely to override any hagiographic impulses,” Eno said.
If that was enough to pique Eno’s interest in the project, for Hustwit the approach was about provoking new ways of creating and experiencing a film.
“I like movies where you learn different things about the subject, but you, as the viewer, make the connections… I always think that’s a lot more rewarding, as a viewer. It’s a different kind of filmmaking, but it’s also a different kind of film watching.”
It helps that the first and last scenes of the 85-minute doc are always the same. Plus, there are certain scenes pinned to the same timeslot in each version, including a scene where Eno discusses generative art.
Everything else, however, can be different, depending on the material the generative program decides to insert.
“It’s kind of a modular approach,” Hustwit explained to Forbes’ David Bloom. “You can learn different facts about that person at different times in the film. In the end, you make the connections as a viewer.”
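Anamorph’s patent-pending system is proprietary, so purely as a thought experiment, the structural rules described above (a fixed opening and closing scene, a few scenes pinned to set slots, and the rest drawn at random from a larger pool) might be sketched roughly like this. Every name and number here is invented for illustration; this is an outsider’s sketch, not Anamorph’s actual software.

```python
# Rough illustration of a "modular" sequencer: fixed first/last scenes,
# some scenes pinned to specific slots, the rest shuffled per screening.
import random

def build_running_order(opening, closing, pinned, pool, total_slots, seed=None):
    """Return one randomized running order.

    opening/closing: scene IDs always shown first and last.
    pinned: dict mapping slot index -> scene ID fixed in every version.
    pool: remaining scene IDs the system may draw from.
    total_slots: length of the finished film in scenes.
    """
    rng = random.Random(seed)
    order = [None] * total_slots
    order[0], order[-1] = opening, closing
    for slot, scene in pinned.items():
        order[slot] = scene
    free_slots = [i for i, s in enumerate(order) if s is None]
    chosen = rng.sample(pool, len(free_slots))  # each version draws a different subset
    for slot, scene in zip(free_slots, chosen):
        order[slot] = scene
    return order

# Example: a 10-slot version with the generative-art scene pinned to slot 4
version = build_running_order(
    opening="cold_open", closing="final_interview",
    pinned={4: "eno_on_generative_art"},
    pool=[f"archive_{n}" for n in range(40)],
    total_slots=10,
)
print(version)
```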
Like one of media artist Refik Anadol’s AI creations, Eno is going to be different each time it is screened. That poses a problem for film critics, Bloom points out.
To Deadline, Hustwit explained, “There are billions of different combinations that could possibly exist of this movie, and every time you watch it, you’ll never see that version again. So, it’s an interesting experiment. We can change the way that the form of film works, [so] let’s talk about the possibilities.”
Hustwit had another reason for making the film this way, too. It’s a showcase for the generative tool (cutely dubbed Brain One) that he built with digital artist Brendan Dawes at their startup company, Anamorph.
The tech was trained to select scenes from over 500 hours of archival footage and new interviews with Eno, as well as animated visuals and music, to produce the unique iterations of the doc.
Anamorph spent five years building the software, combining patent-pending techniques with the team’s own knowledge of storytelling. The company says it’s not trained on anyone else’s data, IP or other films.
“The main challenge was creating a system that could process potentially hundreds of 4K video files, each with its own 5.1 audio tracks, in real time,” Dawes tells TechCrunch. “The platform selects and sequences edited scene files, but it also builds its own pure generative scenes and transitions, creating video and original 5.1 audio elements dynamically. The platform also needed to be robust in a live situation, it wasn’t an option to have it crash. So, we did a crazy amount of testing. We can create a unique version of a film live in a theater, or we can render out a ProRes file with its own 5.1 audio mix and make a DCP from that.”
He also stresses, “This is a generative system, not generative AI. I just need to make that clear, because pretty much everything that’s been said about Eno uses the word AI.”
Advertising agencies have apparently expressed their interest, Hustwit reveals to TechCrunch, with one company wanting to make 10,000 versions of a one-minute commercial.
Rather than make its tools publicly accessible, the company wants to collaborate on projects so it can “consider the source material and the overall story goals,” says Hustwit.
“Our main goal is to get the idea out about this new kind of cinema and hook up with great collaborators to help explore this idea.”
Hustwit ponders what an experimental, form-pushing director like Jonathan Glazer (The Zone of Interest) could do with something like this.
“You could make a movie that’s always on, always evolving, always changing,” Hustwit told Forbes. “I feel like Eno, it’s really kind of an opening conversation. What’s next? What can we do with this?”
A streaming service such as Netflix — which has played with interactive forms of video — could easily generate a different version of the documentary every day, Hustwit added.
However, to TechCrunch he poured cold water on the idea, saying that streaming networks aren’t equipped to dynamically generate unique video files and stream them to thousands of viewers so that each viewer is getting their own version of a movie.
“When we premiered Eno at Sundance, all the big streaming companies loved it, but they also admitted that their systems can’t handle the tech involved… These streamers need to differentiate, and I think enabling the films and shows they’re releasing with generative technology is a way to do that,” says Hustwit.
It’ll likely take years before streaming services adapt to the technology. Until that happens, Anamorph is sticking to live events and theatrical releases.
“Something that the theater industry badly needs right now is a reason to get people to come in, and if there is a uniqueness about the live cinema experience, that’s one way that can be achieved,” he adds.
You may have heard that the creator economy is projected to grow into a $480 billion industry by 2027, but did you know that there are now more than 50 million creators working in the industry in 2024? And that 4% of those workers earn more than $100,000 annually? Those figures come from Kajabi’s The State of Creators ’24 Report.
This report focuses on the creators earning six-figures or more, assessing what they have in common to divine what makes for success in the creator economy.
How Creators (Really) Make Their Money
First, there’s more to being a successful creator than social media posts and striking brand deals (although creators do say those are key!).
To crack the six-figure ceiling, creators say they must diversify their revenue streams, with five (!) or more sources of income being a differentiator — and those making more than $150k annually report using at least seven to make that salary, according to Kajabi’s survey.
This is especially revealing because 66% of creators say they made the majority of their income from brand deals alone. However, these types of partnerships are fickle, and three-quarters of those who self-identify as top earners say that multiple revenue streams are crucial to financial success. “[D]iversifying their income streams empowers creators to turn down deals that could compromise their authenticity,” according to the report.
However, it’s worth noting that even creators who prize their authenticity say they can be tempted to compromise, for the right price. More than half of those in the $100k+ bracket said they might work with a brand whose values didn’t align with their own if the payoff was right (a specific number was not named).
So if you can’t go all-in on brand deals, what are other popular options? Those in the six-figure-plus club told Kajabi they make money from passive income (such as digital products, platform payouts and physical products) as well as coaching and consulting jobs, which they may do in person or online.
The most popular digital offerings include: online courses, digital downloads, subscriptions or memberships, and online consulting/coaching.
Social Is About Engagement (and Lead Gen)
The Creator Economy may be synonymous with social media, but they’re not one and the same. As Kajabi puts it: “Social platforms are great for building audiences, not businesses.”
Successful creators are able to translate followers into customers using lead gen tactics, leveraging audience interaction and community into cash generated on platforms that they own.
However, creators are not likely to abandon social media any time soon. Even six-figure creators would have trouble if they lost access to a platform. Losing YouTube would mean missing out on at least $50,000 annually for 42% of creators making $100k+. Instagram going under would account for the same loss for 38% of those surveyed; TikTok would mean the same for 37%; and 36% said the same would be true for Facebook.
Despite those numbers, half of creators indicated that they do not trust the very social media platforms that made them popular. They’ve been burned before, after all. Kajabi notes that it will be interesting to see if TikTok’s Creativity Program will shift creators’ attitudes.
Six-Figure Creator Demographics
The most successful creators as of 2024 are:
Male (57%)
Have a bachelor’s degree or higher education (80%)
Work full-time as creators (86%)
Create content for business and marketing, finance or real estate niches
They also tend to reach this financial tier relatively quickly. Four in ten of the six-figure earners reached that status within two years of working in the Creator Economy. Notably, the content niches that catapult creators into this range the fastest are beauty, fitness and gaming.
Their Thoughts on Today’s Hot Topics
Kajabi also inquired about these creators’ attitudes toward two of the buzziest subjects of 2023: AI and unionization.
AI emerged as a key differentiator for creators who’ve had financial success. They tend to utilize it twice as often as their counterparts making $99k or less, with 29% reporting that they leverage AI tools daily and 43% saying they use them weekly.
“Six-figure creators are bullish on AI in 2024 specifically to save time and ultimately help reduce creator burnout,” according to Kajabi.
Also, success has not made creators want to go it alone. Likely driven by their distrust of the social companies, almost 50% of these creators say they’d be interested in joining a creator union if one were formed. Notably, their interest was higher than that of their peers earning less cash.
The creator economy, engaging tens of millions globally, is on track to reach nearly half a trillion dollars by 2027, with creators becoming pivotal media brands and reshaping the M&E landscape.
This year’s NAB Show also introduces the Creator Lab, a new dedicated experience designed to foster collaboration and learning through panels, workshops, and discussions on content monetization, video transformation, AI, and mobile content creation.
NAB Show 2024 will host a variety of sessions related to the creator economy, including discussions on transforming entrepreneurs into entertainers, the journey of a college thesis to a viral web series, and the integration of brands in influencer content.
Tens of millions of individuals worldwide are engaged in the growing creator economy, which is projected to reach nearly half a trillion dollars by 2027. Creators and influencers are carving out a massive niche in the overall Media & Entertainment landscape, becoming their own media brands, shifting video consumption habits and attracting advertising dollars.
The NAB Show is a central hub for networking, exploration and education within the evolving creator market. Leading the charge at this year’s event, which runs from April 13-17 at the Las Vegas Convention Center, is renowned YouTube star, digital creator, filmmaker and multimedia innovator Casey Neistat, a pivotal figure in the digital creation space known for his inspiring and unconventional approach to creativity.
“Do What You Can’t with Casey Neistat” takes place on Wednesday, April 17, from 10:30-11:30 a.m. on NAB Show’s Main Stage. Neistat, who has more than 12.6 million subscribers on YouTube and has posted over 1,100 videos, is a New York-based filmmaker, writer, director, editor and star of the series The Neistat Brothers on HBO.
He won the John Cassavetes Award at the 2012 Independent Spirit Awards for producing the film Daddy Long Legs.
Celebrated for his “Do What You Can’t” philosophy, Neistat’s story is a powerful illustration of how thinking differently and embracing innovation can unlock extraordinary creative potential. It also offers a lens into the broader shifts in content creation and distribution. His transition from traditional media to digital success embodies the significant changes in how content reaches audiences today.
Neistat asserts that “Everything now that you’d ever need access to be a great filmmaker is kind of at our fingertips,” highlighting the democratization of filmmaking and storytelling.
“If I had known how to make movies the proper way, then, I would have never gone to Walmart and bought two of their cheapest cameras and turned that into a show that was on HBO,” he says. “It was that lack of understanding of the right way to do things that forced us to find our own path. And I think that was a virtue.”
Reflecting on his career, Neistat notes, “I spent years making this HBO show… nobody saw it, and then I make this silly little video and millions and millions of people saw [it]. That showed me something about distribution.”
This year’s NAB Show also debuts the Creator Lab, a dedicated experience to foster collaboration, learning and networking through panels, workshops, fireside chats and more. Topics include content monetization, video transformation, AI and mobile content creation. Featured speakers include Chris Laxamana, co-host and producer of The Adam Carolla Show and Marc Hustvedt, president of MrBeast YouTube.
Recess Therapy: How a College Thesis Became an Instagram Sensation: Moderator Jasmine Enberg of Insider Intelligence will host Scott Dunn, VP, Talent and Business Development at Doing Things, and Julian Shapiro-Barnum, creator and host of Recess Therapy, to learn how Shapiro-Barnum’s college thesis became a viral web series. The hit series continues to grow into a major property, with brands turning to it to raise awareness of projects and campaigns.
Creator Economy: From Million Dollar Listing to Estate Media, Transforming Entrepreneurs into Entertainers: Cami Lincowski will discuss how entrepreneurs are disrupting traditional industries by thinking like a creator-led brand and developing content that engages, monetizes and scales. Founded by real estate mogul and television personality Josh Flagg, of Bravo TV’s Million Dollar Listing, the newly launched Estate Media is rolling out a slate of compelling and captivating content that professionals and fans crave.
Creator Economy: Lessons from Follow Me’s Influencer Showdown: The creators and distributors of Follow Me — the brand-sponsored reality competition series that gives influencers the chance to make it big on social media, work with major product partners and win a $50,000 grand prize — will give the inside scoop on moving beyond traditional advertising spend to authentically integrate brands on screen. Speakers include Derek Daugherty, business development and sales marketing for M&M’s, FedEx and F1; Michele Fino, head of branded entertainment for Crackle Connex; model, actress and influencer Charlotte McKinney; and John Stevens, CEO of V10 Entertainment.
Check out all sessions related to the creator economy here.
Don’t miss the chance to be part of this transformative experience. Register now for NAB Show 2024 and join a community of creators, innovators, and industry leaders as we explore the future of Media & Entertainment.
The 2024 NAB Show will feature Creator Lab, a new show floor experience helmed by Jim Louderback and Robin Raskin.
February 28, 2024
Spies Like Us: The Collaborative Post on “Mr. & Mrs. Smith”
TL;DR
“Mr. & Mrs. Smith” editor and co-producer Greg O’Bryant talks about crafting the new TV series from the makers of “Atlanta.”
There were multiple editors across the eight episodes, including Kate Brokaw, Kyle Reiter and Isaac Hagy, but O’Bryant cut the pilot, and served as the conduit through which picture, VFX and sound passed.
He says the show’s reshoots were “no big deal,” and actually helped the final show come together.
The premise for new Amazon Prime Video series Mr. & Mrs. Smith is like online dating. “You show up and meet somebody and you see how far it goes,” says the show’s lead editor Greg O’Bryant.
It is of course a riff on the 2005 hit feature starring Angelina Jolie and Brad Pitt as a married couple who are also spies, working sometimes for and sometimes against each other to ‘hilarious’ effect.
The Amazon redo is from the storytellers behind the critically acclaimed series Atlanta: Donald Glover and writer Francesca Sloane. Glover and Maya Erskine star as the two title characters (John and Jane), who operate as an undercover married couple while working for a mysterious spy agency.
“Technically, it exists in the same cinematic universe, which there are some hints about in episode four. But it was everyone’s intention to tell a different story with a different tone,” O’Bryant explains in an interview with Matt Feury on The Rough Cut podcast.
Showrunner Francesca Sloane used the analogy of the “storytelling sandwich.”
“That means the characters’ relationship is the bread and the spy stuff is the meat of the sandwich. It’s there, but the ratio should be skewed towards the relationship parts. We wanted to tell a story about modern relationships, why we get into them, and why we stay in them, if you think about it.”
He adds, “John and Jane had a pretty dire and immediate reason to stay together, but I think it was more about the heart than it was about action. Hopefully we achieved both.”
There were multiple editors across the eight episodes, including Kate Brokaw, Kyle Reiter and Isaac Hagy, but O’Bryant cut the pilot, had a hand in each episode, and is also a co-producer on the show. “I tell everybody that I know just enough to be dangerous,” he joked.
“I think it’s helpful to have an editor’s perspective on everything, whether it’s reshoots, color, sounds, visual effects, or music. It’s different on a film. Film is a director’s medium. TV is a writer’s medium. Writers usually appreciate having a technical head in the game.”
He was heavily involved in creating the score with Sloane, composer David Fleming, and Donald Glover, approving the VFX and working with Harbor Sound on the audio mix.
“The idea is, once we start editing, everything comes through my room before it’s done, whether it’s another editor’s cut, a VFX shot, a music cue, or a score cue.
“What makes TV special is when it feels like it’s all done by the same hand. It’s not just me. Francesca might be in my room. The other editors might be in my room. But it’s all going to come through the same tiny pipeline and, hopefully, that adds a level of consistency to it all.”
There were some pretty extensive reshoots, which O’Bryant says were helpful. “I know the larger world tends to think, ‘Oh, reshooting? Something must have gone wrong,’ [but] I think it almost always helps to go in and pick up a couple of shots for this or that episode, or even add in scenes,” he says.
“We redid whole scenes for the Mr. and Mrs. Smith pilot. They reshot two scenes entirely in the same location and everything. The team looked at what we had and said, ‘Hey, we’ve learned some things about this show. Let’s go back and do it a little differently.’”
In one episode, John and Jane are seeing a therapist about their marriage troubles. Within that we see vignettes of different missions they’ve been on.
O’Bryant says this episode had the most straightforward comedy and the most improv. “There’s a fair amount of improv in the show. Donald and Maya are both comedians, so there’s a lot, but that one had the most improv.”
Episode four was originally planned with an extensive action sequence in the Mexican jungle but the trick was making it work with comedy as well as thrills in the edit.
“We tried it a bunch of different ways. We spent weeks tweaking just that little sequence. We really beat the bushes before coming up with that fast, vibe-y, out-of-control thing we ended up with. I think we found a good middle ground. But more importantly, that’s the tone of the show. The tone of the show is about how to get that hard cut to be funny. We need to remember that these guys aren’t that good at being spies.”
Keeping the audience guessing without tilting bias was key for “Anatomy of a Fall” director Justine Triet and editor Laurent Sénéchal.
February 27, 2024
Navigating the Creator Economy: Leveraging AI for Influencer Marketing
TL;DR
Artificial intelligence is significantly transforming influencer marketing by enhancing content creation, improving discovery processes, and providing advanced metrics for campaign analysis.
According to Ogilvy’s “2024 Influence Trends You Should Care About” report, hyperpersonalization, where AI tailors content to individual users, is a key trend for 2024, leading to more personalized and engaging influencer interactions.
The introduction of AI-generated virtual influencers, such as Meta’s AI Personas, marks a shift towards personalized, one-to-one interactions between influencers and fans, using digital replicas of celebrities.
Despite the benefits, there’s a growing concern about maintaining authenticity in influencer marketing as AI becomes more prevalent. The challenge lies in leveraging AI without losing the genuine connection that audiences value.
Experts argue for a balanced approach to using AI in influencer marketing, emphasizing the importance of not letting AI overshadow the creativity and authenticity that are central to successful influencer campaigns.
In the ever-evolving landscape of influencer marketing, one of the biggest key trends to watch for in 2024 is the increasing use of artificial intelligence. From the integration of AI technology in content creation to enhanced metrics and personalization, AI’s integration into influencer marketing is reshaping how content is created, how audiences are engaged, and how campaigns are measured for success.
Over the past year, discovery has gotten a boost with the development of AI-powered tools that analyze vast amounts of social media data to identify potential influencers who best match a brand’s values and target audience. AI can also predict the potential success of an influencer marketing campaign by analyzing historical data. In addition, AI-driven platforms can assist influencers in generating content by suggesting captions, hashtags, and even optimizing image and video quality.
AI tools can segment an influencer’s audience based on various criteria, enabling brands to effectively tailor their messaging to specific demographics. AI can additionally provide real-time analytics and performance insights, allowing brands to track the success of their influencer campaigns. Finally, AI algorithms can also help identify fake followers and engagement, a critical concern that goes well beyond the influencer marketing industry.
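The vendors behind these tools rarely publish their internals, but discovery of this kind is commonly approximated with text embeddings: represent the brand brief and each influencer’s recent posts as vectors, then rank by similarity. The sketch below is a generic illustration under that assumption; the embed() function is a stand-in for whatever embedding model a platform actually uses, and all names and posts are invented.

```python
# Generic illustration of embedding-based influencer discovery:
# rank creators by how closely their recent content matches a brand brief.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash words into a small vector (replace with a real model)."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def rank_influencers(brand_brief: str, influencer_posts: dict) -> list:
    """Return (influencer, cosine similarity) pairs, best match first."""
    brief_vec = embed(brand_brief)
    scores = {
        name: float(np.dot(brief_vec, embed(" ".join(posts))))
        for name, posts in influencer_posts.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

posts = {
    "outdoor_dad": ["trail running shoe review", "family camping checklist"],
    "beauty_guru": ["spring makeup haul", "skincare routine for dry skin"],
}
print(rank_influencers("sustainable hiking gear for families", posts))
```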
“None of us can be sure of all the ways AI will impact the influencer marketing industry, but it’s a safe bet to say more influencers will be using AI next year,” Danielle Wiley, founder and CEO of influencer marketing agency Sway Group, writes at Forbes.
“AI can help with all sorts of content creation tasks, from writing captions to editing photos and videos,” she notes. “AI can also help influencers (and brands) figure out what their followers like by analyzing comments and likes to suggest what kind of content should be posted next. Tools that utilize AI will make it easier for influencers to quickly create high-quality content, keeping their followers engaged.”
In 2024, the influencer marketing industry is poised at an exciting intersection of technology and human creativity. The brands and influencers who navigate this intersection skillfully, using AI as an ally to enhance their authentic voice, are likely to emerge as the frontrunners in a rapidly evolving digital landscape.
Hyperpersonalization is the Next Big Thing
AI will extend its reach even further into influencer marketing in 2024, industry experts agree. Hyperpersonalization, which tailors content to each individual follower’s specific preferences and behaviors, is one of the biggest AI trends to watch out for this year, according to a report from Ogilvy, “2024 Influence Trends You Should Care About.”
“In 2024, expect to see a more hyperpersonalized form of engagement with influencers as the interplay between influence and AI enters a new era,” the powerhouse agency predicts.
John Harding-Easson, Ogilvy’s head of influence for EMEA, discussed the increasing use of AI in influencer marketing during a video presentation of the report. “It’s been a really big year for AI, particularly in influence,” he said. “It’s expanding at a rapid rate.”
While AI was already firmly on Ogilvy’s radar in its 2023 report, Harding-Easson said, “I think we were quite surprised at just how much the space has advanced.”
Looking at next year, he continued, “What we’re expecting in 2024 is AI influence to enter a new chapter, one that will see a pivot to more hyperpersonalized engagement with influencers.”
As influence continues down this hyperpersonalized path, “we’re seeing that personalization with an influencer isn’t just the feature,” he added. “It can be the foundation of a campaign.”
Virtual Celebrity Influencers
Companies already tapping the power of virtual influencers for hyperpersonalization include Meta, which is poised to capitalize on the trend, per Ogilvy’s report.
“Meta’s AI Personas, introduced in late 2023 and fully deployed in 2024, signal a significant shift from broad-reaching influence to personalized, one-to-one interactions that maintain a sense of authenticity,” it reads.
First introduced at Meta Connect 2023 as AI chatbots that have “personality, opinions, and interests, and are a bit more fun to interact with,” Meta’s AI Personas employ AI replicas of celebrity influencers to attract engagement, as Fortune’s Alexandra Sternlicht reports.
“Developed in partnerships with stars such as Charli D’Amelio, Tom Brady, and Kendall Jenner, the bots use the magic of generative AI to create animated digital replicas of the celebrities. Users of Meta’s WhatsApp, Instagram, and Messenger can have one-on-one interactions with the bots, asking them questions, confiding in them, and laughing together at their jokes.”
“Meta Personas really does signal a shift in the dynamic between influencers and their fans,” Harding-Easson emphasizes. “The ability to chat with your favorite influencer, your favorite celebrity is an experience that was previously unimaginable to fans and it’s really just going to change what we expect from our influencers as well.”
As with any technology still in its infancy, however, “some of the experiences are quite clunky at the moment,” he acknowledges. “Some of the conversations feel forced, but this is only round one.”
Virtual celebrity influencers are “just the first new form of personalized communication and content discovery,” Harding-Easson adds, noting that Meta has also announced plans to allow users to create their own AI replicas later this year. “So scaling interactions with AI influencers is really ripe with opportunities.”
Alex Dahan, founder and CEO of global creator marketing company Open Influence, discussed the implications of AI influencers for advertisers with Marketing Drive. “These virtual personalities, created using advanced AI technologies, can offer brands new and innovative ways to engage with audiences, especially in the realms of fashion, technology, and entertainment,” he said.
However, the reliance on AI influencers comes with its own set of challenges. One key concern is maintaining the authenticity that is the hallmark of successful influencer marketing. There’s a delicate balance to be struck: leveraging AI for enhancing creativity and efficiency, while ensuring that the content remains genuine and relatable. Overuse of AI risks creating a disconnect, as audiences tend to value content that resonates with real-life experiences.
“With AI, it’s all about finding the right balance — using advanced tools to enhance creativity and efficiency, while keeping authenticity and ethical considerations at the forefront,” says Wiley.
Perhaps anticipating the advent of AI in influencer marketing, last year Ogilvy called for an industry-wide AI Accountability Act that would mandate disclosure around the use of AI influencers, Harding-Easson said. Meta, he says, is already moving in this direction, deploying watermarks on its AI Personas to help users clearly distinguish the virtual from the real.
According to Rafa Titus, global head of influence at Ogilvy, “32% of people already can’t tell a human face from an AI face, and that’s going to go up.” The Accountability Act, Titus says, is aimed at “really making sure that marketers are disclosing their use of AI influencers so that that trust that we have in the space doesn’t go away.”
“As the industry tries to get ahead of the next phase of the creator economy, naturally many are looking to artificial intelligence. They’re not necessarily creating AI influencers a la Lil Miquela, but offering product imagery that can supplant influencer-styled photo shoots and more personalized product recommendations via chat bots,” Tassin observes.
The promise of generative AI does have some overlapping value with what influencers provide, she says. “And yet, while GAI tools like ChatGPT took the world by storm this year and captured the imagination of retail industry leaders, consumer trust in AI didn’t necessarily follow the hype.”
The exciting potential of customer-facing generative AI for e-commerce brands won’t be realized in 2024, Tassin predicts. “Retailers should resist chasing shiny objects and look to consumer adoption of other types of retail tech for signals of the adoption curve,” she warns. “If GAI tools follow patterns similar to that of AR product visualization and size measurement tools, widespread adoption of customer-facing GAI is a long way away from replacing the human faces and expertise that influencers provide. Instead, retailers should ensure their influencer relationships are in sync with best practices based on what actually drives purchases on social media.”
Wiley agrees that influencers and their partnered brands should be cautious about an over-reliance on AI. “While AI tools are great for streamlining tasks, using them too much can make posts seem artificial or overly polished, risking the loss of that genuine, real-life connection with followers,” she says. “There’s a risk of influencers becoming too dependent on AI for content ideas, which might lead to a disconnect with what their audience truly values. Lastly, there are ethical concerns. For instance, if an AI tool creates too artificial of a scene, it could mislead followers, causing trust issues.”
Adweek’s Adam Rossow also argues that influencers won’t necessarily look to AI for content creation. “While brands and agencies will tap into AI to help guide their choice of influencers, analyze campaigns and brainstorm on their creative briefs, we will see far less reliance on AI from the influencers themselves,” he writes.
“Leaning on AI takes away from their pride of creative ownership and has some fearing plagiaristic repercussions. However, what will ultimately drive influencers to eschew AI in formulating creative is the one thing they hang their hats on: authenticity.”
“Without authenticity, the relationship between an influencer and their followers is tarnished, if not completely broken. The lure of saving precious time and creative energy is not great enough for influencers to risk augmenting their voice or coming off as even remotely synthetic. AI will make influencer marketing more approachable, streamlined and measurable, but it won’t be embraced as an easy button for influencers and creators.”
The Creator Economy is reshaping digital advertising, offering brands authentic engagement and accelerating the consumer purchase journey.
February 27, 2024
How to (Comprehensively) Compare Cameras: HBO’s CAS at NAB Show
TL;DR
The HBO Camera Assessment Series is a feature-length movie that demonstrates various camera systems. In Las Vegas this year, NAB Show will screen the most recent assessment.
Additionally, CAS co-creators Stephen Beres and Suny Behar will discuss the test methodology, the technology changes and the type of analysis deployed.
On Tuesday, April 16, CAS co-creators Stephen Beres and Suny Behar will discuss the test methodology, the technology changes impacting the style as well as the type of analysis deployed.
For the most recent assessment, Behar and Beres used Zeiss Cinematography lenses to test camera systems including the ARRI Alexa Mini, SONY Venice 2, RED V-Raptor, BLACKMAGIC Ursa 12K, ARRI Alexa 35 and KODAK 5219 Film.
“When we started 10 years ago,” recalls Behar, “a lot of the questions weren’t about comparing the performance and the quality of the cameras as much as it was comparing whether or not some cameras even could perform.
“There was a vast difference between a camera that could record even 10-bit 4:4:4 versus a [Canon] 5D that was 8-bit 4:2:0, so you couldn’t do green screen work; there was significant motion artifacting; and it was difficult to focus. Those larger differences aren’t what we’re looking at now because all the cameras can do at least 10-bit 4:2:2 minimum. They at least have 2K, Super 35-sized sensors.”
There continue to be differences, some quite significant, among the tested cameras, Behar adds, “but it’s in different realms. The tests are no longer about [finding] where the picture just breaks, but as people expect more, there are other issues we investigate.”
There are circumstances people wouldn’t have tried to shoot a decade ago that are becoming standard expectations of a DP.
“You are going to care about signal to noise if you’re trying to shoot with available light, where some cameras will be significantly noisier than others. In the world of HDR, if you’re shooting fire or bright lights, you are going to care about extended dynamic range in the highlights, if you hope to not have to comp all your highlights in with the effects because [the highlights] broke.”
Stephen Beres explains that these tests, which have screened at various venues, serve as the start of discussions for his networks’ productions, not as any kind of dictate.
“We don’t have a spreadsheet of allowed and disallowed,” Beres explains. “What we have is projects like this, so when we sit down together — the studio and the creative team on the show — and we look at these kinds of things as a group, it can help us start the discussion about the visual language of the show. ‘What visual rules should be set up for that world that that show exists in?’
“And then we sort of back that into the conversation about ‘what technology are we going to use to make that happen?’ And that’s not just about cameras. It’s the lensing. It’s what we do in post, and it’s how we work with color. It’s how we work with texture. All those things go together to create the visual aesthetic of the show.”
Once they complete a new installment in the CAS, the company is delighted to share the results with all who are interested. Beres and Behar have both taught about production and post on the university level, and they clearly enjoy sharing their knowledge.
The Assessments
A great deal of thought goes into designing these camera tests in order to display apples-to-apples comparisons, with elements such as color grading and color and gamma transforms all handled identically.
“I think all of the cameras we tested this time shot RAW,” Behar says, “so then you have to make decisions about how you’re going to get to an intermediate [format for grading].”
They decided to use the Academy Color Encoding System (ACES) as a working color space. While there are certainly some people in the cinematography and post realms who still have various issues with ACES, Behar says, it has been useful in some ways because ACES forced every manufacturer to declare an IDT whether they liked it or not.
The IDT, or Input Device Transform, along with the ODT (Output Device Transform), provides objective numerical data quantifying the exact responses of a given sensor so that it can be transformed perfectly into ACES space.
While some manufacturers were reluctant to subject their sensors to such scrutiny (where little tricks involving after-the-fact contrast and saturation, etc., can’t hide their flaws), all did come around because of the growing adoption of ACES and its support from the Academy of Motion Picture Arts and Sciences and the American Society of Cinematographers.
Because of this, the ACES imagery upstream of any color grading really does provide a look into a sensor’s dynamic range, color and detail rendering.
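In broad strokes, an IDT amounts to linearizing the camera’s encoded signal and then applying a 3x3 matrix that maps the camera’s native primaries into the ACES (AP0) color space. The sketch below shows only the shape of that operation; both the decode curve and the matrix values are placeholders for illustration, not any manufacturer’s published transform.

```python
# Shape of an Input Device Transform: decode the camera's log signal to
# linear light, then apply a 3x3 matrix into ACES2065-1 (AP0) primaries.
# Both the decode curve and matrix here are placeholders, not real camera data.
import numpy as np

def decode_log_to_linear(code: np.ndarray) -> np.ndarray:
    """Placeholder log decode; a real IDT uses the manufacturer's curve."""
    return np.power(2.0, 10.0 * (code - 0.5))  # toy curve for illustration only

CAMERA_TO_AP0 = np.array([   # placeholder matrix, NOT a published IDT
    [0.75, 0.20, 0.05],
    [0.05, 0.90, 0.05],
    [0.02, 0.10, 0.88],
])

def apply_idt(camera_rgb_code: np.ndarray) -> np.ndarray:
    """camera_rgb_code: (..., 3) array of log-encoded camera RGB."""
    linear = decode_log_to_linear(camera_rgb_code)
    return linear @ CAMERA_TO_AP0.T   # per-pixel matrix multiply into AP0

pixel = np.array([0.45, 0.50, 0.52])  # one log-encoded pixel
print(apply_idt(pixel))
```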
Then, the CAS did the same across-the-board grade (no secondaries, no Power Windows) and transform to deliver final Rec. 709 images for all the tested cameras, to test many of the different sensors’ attributes and liabilities. Next, to test in HDR, they derived a PQ curve from the same picture information and opened it up without any further adjustments.
“The only test that we did not go through that exact pipeline for,” says the cinematographer, “was the dynamic range test. I’ve always felt that the ACES Rec. 709 transform is too contrasty, meaning it has a very steep curve and a very high gamma point, which tends to crush blacks and push up mids. It does give you a punchy image, but if we’re testing dynamic range, and especially in low light, the first question the viewer would have had would have been, ‘Is there more information in the blacks?’ or ‘How did you decide what to crush?’ and those are very valid points.”
For this, Behar shot a very large number of test charts, which gave the team the ability to map their own gamma transform. Shooting in log at key exposure and at many steps over and under, the team was able to lock in an across-the-board standard for middle gray based on each camera system’s log profile. Once each camera is set up for perfectly exposed middle gray, the tests of over- and under-exposure can be objectively compared.
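One way to picture that chart exercise: once a camera’s log profile has been decoded to linear light and middle gray is anchored at 18% reflectance, every chart patch can be expressed in stops over or under that anchor, which is what makes over- and under-exposure behavior directly comparable across cameras. Here is a minimal sketch of the arithmetic, with invented readings rather than real chart data.

```python
# Expressing chart measurements as stops over/under middle gray (18% reflectance)
# so different cameras' over- and under-exposure behavior can be compared.
# The sample values are invented; real data comes from decoded test charts.
import math

MIDDLE_GRAY = 0.18  # linear scene reflectance used as the exposure anchor

def stops_from_middle_gray(linear_value: float) -> float:
    """Photographic stops above (+) or below (-) middle gray."""
    return math.log2(linear_value / MIDDLE_GRAY)

# Invented decoded-linear readings from a step chart for one camera
chart_readings = [0.0056, 0.0225, 0.09, 0.18, 0.72, 2.88, 11.52]
for value in chart_readings:
    print(f"{value:8.4f} linear -> {stops_from_middle_gray(value):+5.1f} stops")
```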
Given that a number of the cameras tested reached approximately 18 stops of dynamic range, I enquired whether such a capability is overkill. Circumstances where a cinematographer would actually use that much dynamic range are few and far between. More likely, they’ll want to use lighting and grip gear to limit such situations, as they always have.
“That’s right,” says Behar. “I think most DPs won’t need more than 12, maybe 13, stops of dynamic range to tell a story. You can’t hide a stinger in the shadows if you’re seeing 10 stops under. You can’t have a showcard in the window if you’re seeing 12 stops over.
“But then it stands to reason that the camera manufacturers should allow us to use that information to create soft knee rolloffs and toe rolloffs for lower dynamic range, but with beautiful rolloffs into the highlights and the shadows.
“You can’t create a look [digitally] that is like Ektachrome, with maybe four stops over and three and a half under, if you’re clipping at four stops. You need to burn and roll and bleed and have halation. With the dynamic range on some of these cameras we’ve tested, you can do more than just light for an 18-stop range.”
Behar and Beres both take great pride in these CAS films, which are shot and produced to feel like high-quality HBO-type programming, not just charts and models sitting in front of charts.
“This is real scenes with moving cameras, moving actors,” says Behar, promising the cinematography and production is of the highest caliber. “The number one feedback response we’ve gotten so far has been, ‘Holy crap! I thought this was going to be a camera test!’”
Shadow of a Doubt: The Very Deliberate Editing for “Anatomy of a Fall”
TL;DR
The tension between truth and doubt and what we see and hear on screen is at the heart of the story and the filmmaking in “Anatomy of a Fall.”
Editor Laurent Sénéchal talks about the challenges of maintaining ambiguity around the lead character for Justine Triet’s Oscar-nominated thriller.
Triet seems to be putting the nature of truth under layers of translation, explaining that the question of the language is at the core of their work.
There are no easy answers to “did-she-do-it?” mystery Anatomy of a Fall, but neither is this the switch-back sensationalism of Hollywood courtroom dramas like Presumed Innocent or Jagged Edge. Keeping the audience guessing without tilting their bias either toward or against the accused at the center of the drama was the key for director Justine Triet and editor Laurent Sénéchal.
“We didn’t want to play a game, having the audience feeling that she’s guilty for 15 minutes, then she’s not guilty,” the editor told Steve Hullfish’s Art of the Cut podcast. “We wanted the audience to keep their doubts about her, but to start to be endeared by her — to start to be with her in these intimate moments. It was a challenge for Justine to ask the audience to do both: keep their doubts, but also love her.”
Anatomy of a Fall won the Palme d’Or at Cannes and is nominated for five Oscars, including Best Picture and Best Film Editing for Sénéchal, who is also nominated for an ACE Eddie Award.
At the beginning of the film, a man dies and like many thrillers, there’s a pacing of the revelations — the things that are discovered about the death. Triet and Sénéchal however are constructing, or deconstructing, the courtroom drama genre.
Sénéchal says it was really important to be precise with these elements to maintain the ambiguity around Sandra, the accused (played by Sandra Hüller, who is nominated for an Oscar for Best Actress).
“The idea was to start like a thriller movie. We were aiming to use this genre movie to lead the audience as far as possible into the complexity of our characters. It’s a movie that is not straightforward.”
The film questions the nature of love and married relationships and what it is to be an individual within a couple; it asks what it is to be a father and what it is to be a son, and it interrogates our memories and how we construct truth, in real life and in the movies.
“It’s really complex. So, you’re going to see a thriller, but an unusual one. We had to pay attention to this idea during the editing process.”
Sénéchal goes into more detail in discussion with Awards Radar, telling Maxance Vincent, “It was really challenging because as soon as we had scenes in a certain order or scenes showing some things between her and her husband Samuel [Theis], you could have a total derailment. We could derail the main contract between the audience and the movie because we’d edited scenes in a certain way, where we felt like Sandra was being manipulative towards Samuel.
“So we had to rethink, screen the movie, and redesign some scenes, to make sure that we find her endearing, even if we have doubts about her. It was really hard to build the path of the audience. You are free as an audience to make up your own mind about what you see. My job as editor is to build very wide roads for the audience to make their own journey into the movie.”
“Anatomy of a Fall” | Official Clip – “You Are Not A Victim”
Sénéchal spent 40 weeks in editorial to shape the picture. There’s a section in the trial where an audio recording of a fight is being played in court, and it starts with just the audio, then it jumps to flashbacks of the actual fight.
“I asked them to shoot it in a way that we have options,” he related to Hullfish. “We can stay long in the courtroom before going in the flashback if we want to because I knew that this moment was going to be tricky for me. They got very long shots on Sandra Hüller. Also they got the audience in the courtroom.”
He continued, “What worked was to be long enough for the audience to be a bit lazy; they start to get used to the audio, and that’s when I go into the flashback, and you are very soon taken by the fight itself. Then, coming back into the courtroom we wanted it to be at the highest climax of the fight. But the climax — the words — what she’s saying to her husband — is so harsh. It’s really violent. The words are like weapons.”
Sénéchal had previously collaborated with Triet for 2016’s In Bed with Victoria and 2019’s Sibyl. Director and editor discuss their relationship in an interview recorded for Deadline’s The Process, as well as filming scenes with the film’s canine character and the choice to use different languages.
In Anatomy of a Fall, the characters live in France, but since the main character, Sandra, is not herself French (nor does she speak it very well), most of her dialogue is spoken in English.
This includes her appearances in court where after attempting to give her evidence in French, she gives up and speaks in English for the rest of the case, resulting in a rather strange scenario of her being questioned in French, understanding perfectly, and responding in English.
Triet seems to be putting the nature of truth under layers of translation, telling The Process that the question of language is at the core of their work.
Sénéchal adds, “We also wanted the movie to be simple for the audience because the subject was so complex. There is a complexity in the empathy for the main character.”
Even when the verdict is reached in the case there’s still a lingering sense of ambiguity which bleeds into the moment that Sandra is reunited with her son.
“What we wanted to show is the arc of a boy who is growing up,” he explained to Awards Radar. “You still don’t know the mother, you are starting not to know how the boy is feeling. When they’re reconnecting in the house, everything is so complicated.”
He adds, “The movie shows how you must stop thinking of life as straight, simple, and compact. Becoming a grown-up for him is becoming opaque, too, because at the end, when he is doing his second testimony, we see him calling on memories, but it feels like an invention. We want the audience to feel that when we have these images, who do we ultimately suspect? There is a tension between truth and doubts and what is on screen. We don’t have access to everything he’s thinking, and he may become like his mother, someone we don’t know. But it’s our condition to listen to them and make up our minds about what is on screen.”
Elaborating on this to Kara Warner at Vanity Fair, Sénéchal said he identified that the flashback argument was “a very strange scene” in the script. “At the beginning, I was wondering if it was going to work because it’s nobody’s point of view at all. Then I saw the material and when we started, it was obvious that it has to be like that. That’s the power of cinema. It can seem weird when you read it, but when you are in front of the actors, the characters, it’s so vivid. It’s at the heart of the story.”
Speaking on The Rough Cut podcast, Sénéchal discussed turning the thriller into Kramer vs Kramer as the drama pivots on how we view the relationship between husband and wife as we learn more private details about them.
The nuances in their relationship stem from the script, but the writer-director and editor still had to extract the right balance from the coverage in editorial.
“It’s not a movie which was heavily recut in post so much as redesigned,” Sénéchal told Vanity Fair. “The main aspects of the movie were really well-scripted. We made deliberate choices like the fact that we didn’t use any score music. I think it was a good choice because if we had divided the argument in pieces, in sections, it would’ve been another movie.”
Sénéchal compared the delicate juggling act to playing Tetris. “If [we] changed some slight details in the beginning, you could really see another movie emerge. Sometimes we had some derailment of the ambiguity around Sandra. The movie was no longer very interesting when she was becoming too innocent or too guilty, or too manipulative. The main challenge for the editing was this arc of ambiguity for her, how to stay with her, how to be endeared by her with this ambiguity still around her. It was really hard to do.”
Sora prompt: Reflections in the window of a train traveling through the Tokyo suburbs.
Late last week, OpenAI announced a new generative AI system named Sora, which produces short videos from text prompts. While Sora is not yet available to the public, the high quality of the sample outputs published so far has provoked both excited and concerned reactions.
The sample videos published by OpenAI, which the company says were created directly by Sora without modification, show outputs from prompts like “photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee” and “historical footage of California during the gold rush.”
At first glance, it is often hard to tell they are generated by AI, due to the high quality of the videos, textures, dynamics of scenes, camera movements, and a good level of consistency.
OpenAI chief executive Sam Altman also posted some videos to X (formerly Twitter) generated in response to user-suggested prompts, to demonstrate Sora’s capabilities.
Sora combines features of text and image generating tools in what is called a “diffusion transformer model.”
Transformers are a type of neural network first introduced by Google in 2017. They are best known for their use in large language models such as ChatGPT and Google Gemini.
Diffusion models, on the other hand, are the foundation of many AI image generators. They work by starting with random noise and iterating towards a “clean” image that fits an input prompt.
A video can be made from a sequence of such images. However, in a video, coherence and consistency between frames are essential.
Sora uses the transformer architecture to handle how frames relate to one another. While transformers were initially designed to find patterns in tokens representing text, Sora instead uses tokens representing small patches of space and time.
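To make the idea of a diffusion transformer operating on "tokens representing small patches of space and time" concrete, here is a minimal, hypothetical PyTorch sketch. It is not OpenAI's code or Sora's actual architecture; the patch size, embedding width, noising schedule and layer counts are illustrative assumptions only.

```python
# Minimal, illustrative sketch of a "diffusion transformer" over spacetime patches.
# NOT Sora's architecture; all sizes and names are assumptions for illustration.
import torch
import torch.nn as nn

class TinySpacetimeDiffusionTransformer(nn.Module):
    def __init__(self, patch_voxels=4*16*16*3, dim=256, layers=4, heads=8):
        super().__init__()
        self.embed = nn.Linear(patch_voxels, dim)            # spacetime patch -> token
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=layers)
        self.to_noise = nn.Linear(dim, patch_voxels)          # token -> predicted noise

    def forward(self, noisy_patches, t):
        # noisy_patches: (batch, num_patches, patch_voxels); t: (batch, 1) diffusion step
        tokens = self.embed(noisy_patches) + self.time_mlp(t).unsqueeze(1)
        tokens = self.backbone(tokens)                        # attention across space AND time
        return self.to_noise(tokens)                          # noise estimate per patch

# One illustrative denoising training step: predict the noise that was added.
model = TinySpacetimeDiffusionTransformer()
video_patches = torch.randn(2, 64, 4*16*16*3)                # 2 clips, 64 spacetime patches each
t = torch.rand(2, 1)                                          # random diffusion timestep
noise = torch.randn_like(video_patches)
noisy = video_patches + t.unsqueeze(-1) * noise               # crude noising schedule (illustrative)
loss = nn.functional.mse_loss(model(noisy, t), noise)
loss.backward()
```

Because every patch attends to every other patch across the whole clip, the model can keep objects consistent from frame to frame, which is what the transformer contributes on top of the diffusion process.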
Leading the Pack
Sora is not the first text-to-video model. Earlier models include Emu by Meta, Gen-2 by Runway, Stable Video Diffusion by Stability AI, and recently Lumiere by Google.
Lumiere, released just a few weeks ago, claimed to produce better video than its predecessors. But Sora appears to be more powerful than Lumiere in at least some respects.
Sora can generate videos with a resolution of up to 1920×1080 pixels, and in a variety of aspect ratios, while Lumiere is limited to 512×512 pixels. Lumiere’s videos are around five seconds long, while Sora makes videos up to 60 seconds.
Lumiere cannot make videos composed of multiple shots, while Sora can. Sora, like other models, is also reportedly capable of video-editing tasks such as creating videos from images or other videos, combining elements from different videos, and extending videos in time.
Both models generate broadly realistic videos, but may suffer from hallucinations. Lumiere’s videos may be more easily recognized as AI-generated. Sora’s videos look more dynamic, having more interactions between elements.
However, in many of the example videos inconsistencies become apparent on close inspection.
Promising Applications
Video content is currently produced either by filming the real world or by using special effects, both of which can be costly and time consuming. If Sora becomes available at a reasonable price, people may start using it as a prototyping software to visualize ideas at a much lower cost.
Based on what we know of Sora’s capabilities it could even be used to create short videos for some applications in entertainment, advertising and education.
OpenAI’s technical paper about Sora is titled “Video generation models as world simulators.” The paper argues that bigger versions of video generators like Sora may be “capable simulators of the physical and digital world, and the objects, animals and people that live within them.”
"a giant cathedral is completely filled with cats. there are cats everywhere you look. a man enters the cathedral and bows before the giant cat king sitting on a throne."
If this is correct, future versions may have scientific applications for physical, chemical, and even societal experiments. For example, one might be able to test the impact of tsunamis of different sizes on different kinds of infrastructure — and on the physical and mental health of the people nearby.
Achieving this level of simulation is highly challenging, and some experts say a system like Sora is fundamentally incapable of doing it.
A complete simulator would need to calculate physical and chemical reactions at the most detailed levels of the universe. However, simulating a rough approximation of the world and making videos that look realistic to human eyes might be within reach in the coming years.
Risks and Ethical Concerns
The main concerns around tools like Sora revolve around their societal and ethical impact. In a world already plagued by disinformation, tools like Sora may make things worse.
It’s easy to see how the ability to generate realistic video of any scene you can describe could be used to spread convincing fake news or throw doubt on real footage. It may endanger public health measures, be used to influence elections, or even burden the justice system with potential fake evidence.
Video generators may also enable direct threats to targeted individuals, via deepfakes — particularly pornographic ones. These may have terrible repercussions on the lives of the affected individuals and their families.
Beyond these concerns, there are also questions of copyright and intellectual property. Generative AI tools require vast amounts of data for training, and OpenAI has not revealed where Sora’s training data came from.
Large language models and image generators have also been criticized for this reason. In the United States, a group of famous authors have sued OpenAI over a potential misuse of their materials. The case argues that large language models and the companies who use them are stealing the authors’ work to create new content.
It is not the first time in recent memory that technology has run ahead of the law. For instance, the question of the obligations of social media platforms in moderating content has created heated debate in the past couple of years — much of it revolving around Section 230 of the US Code.
While these concerns are real, based on past experience we would not expect them to stop the development of video-generating technology. OpenAI says it is “taking several important safety steps” before making Sora available to the public, including working with experts in “misinformation, hateful content, and bias” and “building tools to help detect misleading content.”
February 22, 2024
AI Solutions for Climbing (Captioning) the Content Mountain
TL;DR
Artificial intelligence can do remarkable things, and its transcription capabilities are only expected to expand over time. Currently, however, AI transcription technology is not without its limitations.
Those limits, particularly in accuracy, and the lack of a legal framework around synthetic voices, mean broadcasters are advised to experiment and use AI with care and human supervision.
Dubbing is likely to be transformed by AI, with human talent potentially morphing into "AI Dubbing Managers" or Creative Directors.
Transcription is one process that stands to be uniquely impacted by recent developments in AI. Thanks to ever-evolving language and learning models, transcribing audio to text has never been faster or easier. But there are also limitations to new AI-powered transcription solutions.
The global translation service market will exceed $47 billion by 2031, largely driven by media and entertainment. Yet current costs to caption titles for distribution on streaming services range between $60 and $100 per program hour, and captioning typically takes one to three days to complete "because of excessive manual intervention," claims Cineverse CTO Tony Huidor.
“Captions, and localization more broadly, are generally major pain points for content owners seeking to monetize their assets across the many streaming services,” Huidor added.
That’s because content companies need to generate far more revenue by broadening their audiences at significantly reduced costs.
“Companies have been priced out of bringing their entire content catalogs to market due to the extremely high costs of captioning and localization,” Huidor said.
The traditional transcription process involves an individual transcriber listening to a piece of audio and manually converting every audio element they hear into text. It is labor intensive, relies on trained specialists, and is costly.
But it does produce accurate results.
AI transcription eliminates the need for a human transcriber and relies instead on automatic speech recognition (ASR) technology. ASR uses language and learning models to interpret human speech and convert specific sounds (or phonemes) into written language.
Some of the most popular speech-to-text software is provided by Google, Azure, IBM, and Dragon Professional.
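As a rough illustration of what an ASR pipeline looks like in practice, the sketch below uses OpenAI's open-source Whisper model rather than any of the vendor tools named above; the model size and input file name are assumptions for the example.

```python
# Illustrative ASR example using the open-source Whisper model (pip install openai-whisper).
# A generic sketch only; not the workflow of any vendor mentioned in this article.
import whisper

model = whisper.load_model("base")            # small general-purpose model; larger ones are more accurate
result = model.transcribe("interview.mp3")    # hypothetical input audio file

print(result["text"])                          # full transcript
for seg in result["segments"]:                 # rough caption-style timing per segment
    print(f"{seg['start']:7.2f} --> {seg['end']:7.2f}  {seg['text'].strip()}")
```

Even a sketch like this shows where the pain points sit: the raw transcript comes back in seconds, but the timing, punctuation and speaker handling are exactly the areas where human review is still needed.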
The upside of automated transcription is that companies can scale their output to keep pace with huge global demand while slashing the costs of the whole exercise.
The main downside, as outlined by Vitac, is inaccuracy. AI systems tend to deliver poor-quality results when the input recording is poor, when there is more than one speaker, and when the audio contains a substantial amount of overlapping speech. Diverse accents or dialects can also inhibit the AI's ability.
“All these variables can substantially impact AI’s ability to interpret and represent the audio of a recording and result in a final transcript containing a substantial number of errors,” Vitac says.
Its prescription to achieve “exceptionally high rates of accuracy” is to match automation with human experts. Not coincidentally this is exactly the service it offers.
Broadcasters and publishers are a little reluctant to rely on AI transcription given that tools to date have not proved foolproof. The BBC, for instance, values the trust that viewers put in the veracity of its output more than most broadcasters, and it also faces increasing pressure to cut costs. It is exploring and evaluating AI tools, a route it advises others to follow.
Vanessa Lecomte, localization operations manager at BBC Studios, told language information site Slator that for all the benefits AI has in localization, it "must match BBC's quality standards at a minimum."
She said, “The main question is whether AI can improve current processes, increase speed to market, and reduce costs.”
Lecomte advised balancing opportunities against the risk. “These technologies offer the potential to speed up the process, which in turn enables you to localize more content, reach new markets, but it shouldn’t be done to the detriment of quality or of a well-respected industry. So do the right thing and commit to a thoughtful localization strategy.”
The BBC is also addressing AI in dubbing using synthetic voices. Lecomte described the current dubbing process as "time-consuming and expensive involving many technical and creative talents." She said her division is exploring the capabilities of AI dubbing technology to try to deliver more content, faster, while still meeting quality standards, adding that this should be done responsibly with regard to talent rights.
Anton Dvorkovich, CEO & Founder of Dubformer, also flagged the industry responsibility of establishing regulations around the ethical use of human voices.
He also believes AI dubbing is “poised to dramatically transform the media industry…with solutions that cut production costs by 30-50%.
“For now, investors and the media are struggling with the challenge of evaluating new solutions. However, the focus is shifting to the potential costs of emerging tools and their impact on the media industry,” he wrote in an op-ed for Streaming Media.
Solutions range from those like Papercup and Deepdub where humans finalize the AI-powered dubbing to “DIY translation tools” aimed at enabling freelance content creators to translate their videos with AI. One such solution, from Heygen, relies on natural-sounding speech synthesis and text-to-speech software developed by Eleven Labs.
He predicts the introduction of an "AI Dubbing Manager," or proof listener, tasked with fine-tuning AI dubbing systems for particular types of content. This role could include listening to the automatic voice overs to grasp cultural nuances, refine voice modulation, and make corrections. Some actors and interpreters may transition into this profession as it evolves, he suggested.
There could be Creative Directors for AI-enhanced productions to guide creative content developed through AI dubbing while the market for actors to license their AI-generated voices will grow. “More tools will enter the market, enabling individuals to generate their voices with AI. Actors will be able to create new voices based on their own.”
Software developer Enco introduced AITrack and ENCO-GPT, which both use ChatGPT to generate language responses from text-based queries for automated TV and radio production workflows.
AITrack, for instance, integrates with Enco’s DAD radio automation system to generate and insert voice tracks between songs. It leverages synthetic voice engines to produce natural-sounding, engaging content between songs.
ENCO-GPT could be used to condense a lengthy written news article into a few sentences, inject breaking news updates within live ad breaks, or automatically create ad copy on behalf of sponsors.
Company president Ken Frommert sees an opportunity to go bigger with both solutions. “We see opportunities to convert a morning or afternoon drive radio show into a short-form podcast, or summarize an 11:00 p.m. local news program for the TV station’s website…. It offers a seamless way to publish content in diverse forms.”
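Frommert's examples all boil down to prompting a large language model with existing copy plus a length constraint. A minimal sketch of that kind of call, using OpenAI's Python SDK rather than ENCO's actual integration (the model name and prompt wording are assumptions), might look like this:

```python
# Hypothetical sketch of the kind of LLM call a tool like ENCO-GPT might make.
# Not ENCO's code; the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_for_broadcast(article_text: str, max_sentences: int = 3) -> str:
    """Condense a written news article into a short script suitable for air."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You write concise, neutral broadcast news copy."},
            {"role": "user",
             "content": f"Summarize this article in at most {max_sentences} sentences, "
                        f"suitable for reading on air:\n\n{article_text}"},
        ],
    )
    return response.choices[0].message.content

# Example usage with placeholder copy:
# print(summarize_for_broadcast(open("long_article.txt").read()))
```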
LEXI Recorded, a VOD automated captioning solution from Australian firm AI Media, claims 98% accuracy, “comparable to human captioning,” and even higher with the use of custom dictionaries or topic models. Its use is priced from 20 cents per minute.
“We are not just meeting but exceeding the demands for high-volume, quick, and precise captioning of recorded content,” said AI-Media’s Chief Product Officer, Bill McLaughlin who will present the product at NAB Show in April.
Captions offers an AI-based video editing app and a solution for automatically generating subtitles. Both products are aimed at content creators and marketers.
It also offers an in-house voice cloning tool trained on licensed audio recordings to translate users’ audio into 28 other languages or use an AI voiceover to narrate the content from scratch.
Gaurav Misra, CEO and cofounder, says Captions' approach to video editing software is different because its tools are designed specifically for editing talking videos. "Most video production editing is focused more on aesthetics like filters and colors, whereas our focus became more about conveying an idea or experience," he told Rashi Shrivastava at Forbes.
Vitac claims its own AI captioning solution, Verbit Captivate, stands apart from "generic" ASR engines in being designed, developed and built in-house. "Whereas other AI captioning vendors either provide an engine or a service, Vitac is unique in that we own both. And because of that, we can change, update, upgrade, and customize customer offerings, tuning our solutions to individual customer needs, creating an offering that achieves accuracy and results on a personal level."
Additionally, it pairs the tech with “human backup” — specialists who boost performance with prep, pre- and post-session research, and live-session monitoring.
Cineverse's MatchCaption targets bulk localization of film, television and video libraries "at significant scale." It claims its generated captions are "perfectly timed and formatted according to industry standards, then auto converted into multiple caption/subtitle formats, to meet the specifications of all streaming platforms."
It also claims its system can complete the same tasks, which currently cost content owners $60-$100 per program hour, for less than $10 per program hour, "and a full feature film can be completed, and quality checked in less than one hour — an 85% reduction in cost and 90% reduction in time."
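These per-minute and per-hour figures are easy to cross-check. A short calculation, purely illustrative and using only the numbers quoted above, shows how the claimed reductions line up:

```python
# Sanity check on the captioning cost figures quoted above (illustrative arithmetic only).
lexi_per_minute = 0.20                          # LEXI Recorded: "from 20 cents per minute"
print(f"LEXI: ${lexi_per_minute * 60:.0f} per program hour")          # $12 per program hour

traditional_low, traditional_high = 60, 100     # $/program hour (quoted traditional range)
matchcaption = 10                               # $/program hour ("less than $10" claimed)
reduction_low = 1 - matchcaption / traditional_low    # vs the $60 end of the range
reduction_high = 1 - matchcaption / traditional_high  # vs the $100 end of the range
print(f"MatchCaption cost reduction: {reduction_low:.0%}-{reduction_high:.0%}")  # 83%-90%
```

The quoted "85% reduction in cost" sits comfortably inside that 83%-90% range.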
OpenAI’s Sora: It’s the Beginning or the End of Video and Either Way It’s a Big Deal
TL;DR
OpenAI shared a first glimpse at new tool Sora that instantly generates videos from just a line of text.
The apparent capabilities of Sora are deemed perfect for stock footage, presentations, and commercials, with developments likely to lead to longer-form films.
AI tools that can generate videos indistinguishable from footage shot with a real camera raise fresh concerns about content creation industry jobs and about misuse, such as the spread of deepfakes.
OpenAI seems to delight in pulling rabbits from a hat, and it was more than aware of the stir its latest research project would cause when it alerted the internet.
Everyone's gone wild for Sora, a new diffusion model being tested which can generate one-minute video clips from just a single text input. To prove what it can do, OpenAI dropped some videos online generated by Sora "without modification." One clip highlighted a photorealistic woman walking down a rainy Tokyo street.
The Sora prompt for this video was: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.
“Every single one of [them] is AI-generated, and if this doesn’t concern you at least a little bit, nothing will,” tweeted YouTube tech journalist Marques Brownlee. “This is simultaneously really impressive and really frightening at the same time,” he added on his YouTube channel.
A blog post on the website of nonlinear editing software Lightworks declared, “Sora’s almost magical powers represents yet another seismic shift in the possibilities of content creation.”
“Sora is a glimpse into a future where the lines between creation, imagination, and AI blur into something truly extraordinary,” says Conor Jewiss at Stuff.
Benj Edwards of Ars Technica thinks OpenAI is on track to deliver a “cultural singularity” — the moment when truth and fiction in media become indistinguishable.
“Technology like Sora pulls the rug out from under that kind of media frame of reference. Very soon, every photorealistic video you see online could be 100 percent false in every way. Moreover, every historical video you see could also be false.”
What has excited the AI and artistic community so much is the cinematic photorealism of the videos produced by OpenAI’s algorithm which seems “to understand how things like reflections, and textures, and materials, and physics, all interact with each other over time,” said Brownlee.
In its research paper, OpenAI states the model deeply understands language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions.
Sora can also create multiple shots within a single generated video that accurately persist characters and visual style.
OpenAI further states it is teaching the AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.
Two videos in particular grabbed attention. "This is one of the most convincing AI generated videos I've ever seen," says Brownlee of a video made with this text prompt: "A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors."
"This looks like it could be an actual film trailer," says Theoretically Media's Tim Simmons. "I mean there's nothing really in here to majorly indicate that this is AI generated."
The other, featuring an aerial flyover, was spun-up from the prompt: “Historical footage of California during the gold rush.”
"The drone footage of an old California mining town looks really, really pretty great," Simmons says. "And even as the camera makes this turn here, the buildings stay intact, they don't start to shift and warp and morph into weird things."
Brownlee thinks it demonstrates "all sorts of implications for the drone pilot that no longer needs to be hired, and all the photographers and videographers whose footage no longer needs to be licensed to show up in the ad that's being made."
"It's also very capable of historical themed footage," he adds. "This is supposed to be California during the gold rush. It's AI generated but it could totally pass for the opening scene in an old western."
Which raises the inevitable question: how long until an entire ad, with every single shot, is completely generated with AI? Or an entire YouTube video, or an entire movie?
Simmons still thinks we are a way off from that "because [Sora] still has flaws and there's no sound [no audio/dialogue sync] and there's a long way to go with the prompt engineering to iron these things out," he says.
Naso agrees that Sora “could change the game for stock footage,” adding that the next stage for AI prompt filmmaking is dialogue-based scenes. “So far, these examples are more like b-roll.”
Nonetheless, even at the pace of AI development it seems OpenAI has caught everyone napping.
Rachel Tobac, a member of the technical advisory council of the Cybersecurity and Infrastructure Security Agency (CISA), posted on X (formerly known as Twitter) that “we need to discuss the risks” of the AI model.
“My biggest concern is how this content could be used to trick, manipulate, phish, and confuse the general public,” she said.
OpenAI also says it is aware of defamation or misinformation problems arising from this technology and plans to apply the same content filters to Sora as the company does to DALL-E 3 that prevent “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others,” as Aminu Abdullahi reports at TechRepublic.
Others flagged concerns about copyright and privacy, with Ed Newton-Rex, CEO of non-profit AI certification company Fairly Trained, maintaining: “You simply cannot argue that these models don’t or won’t compete with the content they’re trained on, and the human creators behind that content.”
Anticipating these concerns, OpenAI plans to watermark content created with Sora with C2PA metadata. However, OpenAI doesn't currently have anything in place to prevent users of its other image generator, DALL-E 3, from removing metadata.
OpenAI said it is engaging with artists, policymakers and others to ensure safety before releasing the new tool to the public. However, its get-out clause is that despite extensive research and testing, “we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it.”
The Microsoft-backed company is valued at $80 billion after a recent injection of VC funds. “It will become impossible for humans to detect AI-generated content by human beings,” Gartner analyst Arun Chandrasekaran warned TechRepublic. “VCs are making investments in startups building deepfake detection tools, however, there is a need for public-private partnerships to identify, often at the point of creation, machine-generated content.”
Sora joins a chorus of other text-to-video generators such as Runway and Fliki, Meta's Make-A-Video generator, and the yet-to-be-released Google Lumiere.
Question: Has Apple taken its eye off the ball? Answer: Maybe not. Its researchers have just published a paper about Keyframer, a design tool for animating static images with natural language.
As Emilia David at The Verge points out, Keyframer is one of several generative AI innovations that Apple has announced in recent months. In December, the company introduced Human Gaussian Splats (HUGS), which can create animation-ready human avatars from video clips. Apple also released MGIE, an AI model that can edit images using text-based descriptions.
Led by Google Lumiere, a new wave of AI video generators coming to market brings long-form, quality narrative video one step closer.
February 20, 2024
AI Video Generators: Is This the Year We’ll See Feature Films?
TL;DR
There is a new wave of AI video generators coming to market that brings long-form, quality narrative video one step closer.
Google Lumiere is a new video AI model still in beta mode that uses space-time diffusion to generate more coherent and realistic videos.
But we're not there yet. Despite incredible leaps, the video AI models have considerable limitations, and using them without knowledge of their training data could open users up to legal challenges.
Google Lumiere has yet to be released, but the company has published video clips it says were created by the new technology. Reviewers have gone wild.
AI video generation has gone from uncanny valley to near realistic in just a few years, and the latest wave of tools, including OpenAI's Sora and Google's Lumiere, brings the prospect of coherent AI-generated features a step closer.
YouTuber Matt Wolfe predicts that 30-60 minute long completely AI-generated films “that are coherent and enjoyable” are coming in the next few months.
Calling the news a “bombshell,” Tim Simmons, owner and founder of AI commentator Theoretically Media, notes that Lumiere isn’t perfect but it’s a startling advance nonetheless, catapulting Google into the front ranks of AI video generators.
“I don’t think I’ve seen before this [idea] you can give the model a reference image and then it will generate videos in the style of that reference image,” Wolfe says.
That’s because Google has taken a different approach to its model.
As The Verge explains, Lumiere uses a new diffusion model called Space-Time-U-Net, or STUNet, that figures out where things are in a video (space) and how they simultaneously move and change (time).
“Other models stitch videos together from generated key frames where the movement already happened, while STUNet lets Lumiere focus on the movement itself based on where the generated content should be at a given time in the video,” says The Verge reporter Emilia David.
Lumiere starts with creating a base frame from the prompt. Then, it uses the STUNet framework to begin approximating where objects within that frame will move to create more frames that flow into each other, creating the appearance of seamless motion.
“By handling both the placement of objects and their movement simultaneously,” Google claims Lumiere “can create consistent, smooth and realistic movement across a full video clip,” reports Ryan Morrison at Tom’s Guide.
Or as Simmons puts it, “Basically, it all comes down to this space time unit which allows for the video to be created all at once, as opposed to other models which begin with an input frame, an output frame and then generates key frames between those. [With Lumiere] the video is generated all at once.”
Beyond text-to-video generation, Lumiere will also allow for image-to-video generation; stylized generation, which lets users make videos in a specific style; cinemagraphs that animate only a portion of a video; and inpainting to mask out an area of the video to change the color or pattern.
It can generate 80 frames at 16fps — or five seconds of video — putting it on par and even ahead of its competitors. But Google’s research paper describes “a new inflation scheme which includes learning to downsample the video in both space and time,” which Google says can pave the way to longer (suggesting even “full-length”) clips.
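As a way to picture what downsampling a video "in both space and time" means, here is a tiny, hypothetical PyTorch fragment. It is not Google's STUNet; it only illustrates how 3D convolutions can shrink a clip along frames, height and width at once, so a model can reason about the whole clip in a single pass, with a mirrored decoder restoring full resolution.

```python
# Illustrative-only sketch of joint space-time downsampling with 3D convolutions.
# NOT Google's STUNet; shapes and layer choices are assumptions for intuition.
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 80, 128, 128)   # (batch, channels, frames, height, width) ~ 5s at 16fps

downsample = nn.Sequential(
    # stride 2 in every dimension halves frames, height and width together,
    # so later layers see the whole clip at a coarse space-time resolution.
    nn.Conv3d(3, 32, kernel_size=3, stride=2, padding=1), nn.SiLU(),
    nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.SiLU(),
)

upsample = nn.Sequential(
    # the decoder side of a U-Net would mirror this, restoring frames and pixels
    nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1), nn.SiLU(),
    nn.ConvTranspose3d(32, 3, kernel_size=4, stride=2, padding=1),
)

coarse = downsample(clip)      # -> (1, 64, 20, 32, 32): time and space shrunk together
restored = upsample(coarse)    # -> (1, 3, 80, 128, 128): back to full clip resolution
print(coarse.shape, restored.shape)
```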
CineD carries an overview of AI video generators to highlight their current capabilities and limitations. This excludes Lumiere but includes leaders like Runway’s Gen-2, Pika 1.0 and Stability AI’s Stable Video Diffusion (SVD).
After testing, Mascha Deikova concludes that AI video generators haven’t yet reached the point where they can take over our jobs as cinematographers or 2D/3D animators.
“The frame-to-frame consistency is not there, the results often have a lot of weird artifacts, and the motion of the characters does not feel even remotely realistic,” she says.
Deikova also finds the overall process still requires “way too much effort” to get a decent generated video that’s close to your initial vision. “It seems easier to take a camera and get the shot that you want ‘the ordinary way,’” she says.
“At the same time, it is not like AI is going to invent its own ideas or carefully work on framing that is the best one for the story. Nor is that something non-filmmakers will be constantly aware of while generating videos.”
There are also other limitations that users of AI video generators should be aware of so that they don’t fall foul of the law. For example, as noted by Deikova, SVD doesn’t allow using its models for commercial purposes. You will face the same issue with Runway and Pika on a free-of-charge basis. At the same time, once you get a paid subscription, Pika will remove their watermark and grant commercial rights.
Lumiere is not available for independent testing, and there is no word as to Google’s timeline for potential deployment. This is perhaps because, as Google’s Lumiere paper noted, “there’s a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases to ensure a safe and fair use.”
In its roundup, CineD warns that nobody knows what data most AI video models were trained on. Most likely, it suggests, the training data consists of anything that can be found online, including images and works of artists who have neither given their permission nor received any attribution.
One company taking a different tack is Adobe, with its content-credentialed image generator Firefly, trained on sources that either Adobe owns or that artists have given the company permission to use. The company is also developing a video AI generator but has yet to release it.
As of today — and we really mean, like right now — text or image-to-video generators could become a quick solution for ideation (storyboarding), previsualization and animation.
But applying visual storytelling tools and "crafting beautiful evolving cinematography" is, right now, one for human minds.
For short films with a decent story we’re probably looking at 18 months to two years out, estimates Tim Simmons of Theoretically Media, speaking on the Curious Refuge podcast. For cinematic quality — equivalent to something you’d watch in a movie theater — we could be only three years away, he predicts.
“These innovations are happening really fast and what I really think is important is for very talented storytellers to take these tools, and to craft their own stories. It’s always going to be a back and forth process but very different from the normal creative process you may be familiar with.”
"AI is elevating everyone into the position of curator to where you can pick through a wide variety of outputs, and select the best one for whichever story you're trying to put together," Caleb Ward asserted on the Curious Refuge podcast. "Essentially, your creative potential has now been multiplied by 50 times using artificial intelligence."
He noted that, in the past, artists would spend a lot of time mastering software. A big part of their artistic endeavor was figuring out how to master software like Adobe After Effects and Illustrator or Houdini.
Now, however, there are a variety of AI tools to select from — general AI video generators to specific tools for specific jobs. The role of the artist is changing to one in which the skill is knowing which ones to use to achieve the vision.
“So essentially, it’s going to be more important than ever before for everybody to have their own creative taste, and to be able to disseminate from all of the outputs that AI is giving you,” Ward said. “It’s like having a team of creative assistants that are working for you. But that team is also just a little hung over. And so you have to be able to push them in the right direction.”
Shelby Ward, on the same podcast episode, makes the point that AI tools are being developed so rapidly that their output will soon be indistinguishable from Hollywood content. It therefore could democratize the whole content creation industry.
"Think about the types of people who are able to create compelling content, films and stories," she says. "You don't have to be networked in Hollywood now to create a film that's really interesting. And you don't have to be born into a certain socio-economic status. Really, anybody with average creative tools has the power to create something really, really interesting. And that barrier to entry is only going to shrink and the quality is only going to get better."
February 20, 2024
Generative AI Could Impact Hollywood Like… Well, Like the Invention of the Camera
TL;DR
Attention is turning to how AI will not just be used in production but how it will transform every aspect of storytelling.
As storytelling becomes more personalized and interactive, films will change and so will gaming, an industry where people can choose their own adventures more easily than moviegoers can. The amount of entertainment available will also balloon.
Just as the arrival of the internet led to an explosion of user-generated content posted to social media, generative AI will accelerate the creation of video content online.
The first claimed long form feature derived from a single text prompt has already been released — and made money.
The new frontier of artificial intelligence is text-to-video and, though it may be a few years before a blockbuster is produced entirely by AI, it seems incredible to even be reading these words barely a year after the first text-to-image generative models were launched.
The first generative models to produce photorealistic images exploded into the mainstream in 2022 — and soon became commonplace. Tools like OpenAI’s DALL-E, Stability AI’s Stable Diffusion, and Adobe’s Firefly flooded the internet with jaw-dropping images.
Runway, a startup that makes generative video models (and the company that co-created Stable Diffusion), released its latest version, Gen-2, with a quality that is striking, says Will Douglas Heaven in a 2024 trends piece for the MIT Technology Review. “The best clips aren’t far off what Pixar might put out,” he gushed.
Just as the arrival of the internet led to an explosion of user-generated content posted to social media, generative AI will accelerate the creation of video content online. Some predict that as much as 90% of online content will be AI-generated by 2025.
“As storytelling becomes more personalized and interactive, films will change and so will gaming, an industry where people can choose their own adventures more easily than moviegoers can. The amount of entertainment available will also balloon.”
Film historian David Thomson has compared GenAI to the advent of sound. “When movies were no longer silent, it altered the way plot points were rendered and how deeply viewers could connect with characters,” notes Suich Bass. Meanwhile Cristóbal Valenzuela, who runs Runway, says AI is more like a “new kind of camera,” offering a fresh “opportunity to reimagine what stories are like.” Perhaps both are right.
There is already one filmmaker claiming to have made the first feature-length film from a single long-form prompt.
At the end of last year artist Dan Sickles released a new version of the classic black and white documentary Man With A Movie Camera. Made in 1929 by Dziga Vertov, it captured a day in the life of Soviet citizens and used a number of groundbreaking techniques.
From “Man With AI Movie Camera”
Sickles has used AI to generate 480 unique iterations of Vertov’s original film in what he calls a homage to — and interrogation of — the original masterpiece, TV Tech’s Phil Kurz reports.
Each iteration of Man With AI Movie Camera was generated from a prompt created by the artist that describes Vertov's original film shot-for-shot, with the exact timing to match each frame, drawing on a data set curated by the artist to give each iteration a distinct aesthetic while retaining the length and essence of each shot to mirror the original film.
The series was created using Stability AI's open-source models and tools (DreamStudio, ClipDrop and Stable Audio) and is being sold online via the NFT marketplace SuperRare. It grossed more than $25,000 when the first sale went live in mid-December. Each of the works in the series will be revealed individually throughout 2024.
Sickles said his project “serves as a model for how AI can function as an equitable public good for creative production.”
Outside of artworks and experiments, could these new video generating AI tools actually serve up a feature film that might give Marvel a run for its money?
It’s no surprise that top studios are taking notice. Paramount and Disney are both exploring the use of generative AI throughout their production pipeline.
In fact, AI presents bigger questions about the future of stories and the nature of collective storytelling. For example, poses Alexandra Suich Bass, will GenAI simply imitate previous hits, resulting in more derivative blockbuster films and copycat interpretations of pop songs that lack depth, rather than original stories and art forms?
And as entertainment becomes more personalized, will there still be stories that become part of humanity’s collective consciousness and move large numbers of people, who can talk about them together?
February 20, 2024
XR, MR, VR, and AR Could Be All a Thing (Again) (Maybe)
TL;DR
Apple's Vision Pro may be lauded for having the highest-fidelity visuals and the most comfortable head-worn VR experience yet, but no one, not even Apple, is sure what it is for.
The biggest barrier to any new technology is figuring out the best way to use it. Apple Vision Pro offers 3D, spatial and immersive features. It’s up to early adopters to figure out the best practices, implementations, and creativity to make killer apps.
Companies that dive in and create spatial computing apps for the Vision Pro today give themselves a competitive advantage and position themselves as leaders in their industries.
When Apple’s Vision Pro was released, dozens of brands had lined up to experiment with native applications written in the operating system dubbed visionOS. It’s early, but none appears to be the revolution in immersive content that some have heralded.
As Scott Simmons bluntly put it in his review of the headgear on the Art of the Frame podcast: “The killer app that will make the masses strap this thing onto their face doesn’t exist right now.”
Nonetheless, these apps do give us a glimpse into how spatial computing could change business, customer service, entertainment, and work.
First, note that Simmons finds little use in working with nonlinear editors Final Cut or Adobe Premiere Pro in the new 3D environment.
“Why would I ever want to do that in the Vision Pro when I’ve got two giant monitors right here on my desktop? I can see no benefit of an app like that in the Vision Pro. Maybe that’s not what it’s really meant to be for. I still go back to the fact that you’ve got to put this thing on your head. And it’s always going to be a niche product because [of this].”
For Apple, too, the Vision Pro is also about testing the waters.
The company wouldn’t expect the first version of the system to fly off the shelves (certainly not with a $3500 price tag). But everyone expects that price to at least halve in the future — not to mention for Apple to develop a product that is more comfortable, less obtrusive and has a longer battery life; then, there will be wider consumer acceptance. (Do we even want that to happen?)
Also on The Art of the Frame, Katie Hinsen likens the product to the iPod before the iPhone: “It wasn’t quite the thing. And I don’t think Vision Pro is the thing for the mass market. I mean, they had to build whole factories just to build each individual custom part just for this thing. But it is the thing that allows us to start figuring out what we’re going to do with it. And people are going to start building apps, start building content. And I think by version three, we’re going to start seeing what the thing’s going to look like.”
Upgraded Digital Experiences
Home improvement firm Lowe’s is among the first to introduce a native Vision Pro app. It uses the illusion of depth experienced by the headset’s wearers to “walk around” properties they might like to own. In a sense, this is nothing particularly new, since virtual reality tours have featured prominently in niche real estate marketing, particularly at the luxury end of the market, for years.
Another app, made by golf’s PGA Tour, includes a 3D spatial map of the par-3 seventh hole at Pebble Beach. The app uses 2D video and 3D-rendered models generated from real-time live-shot data captured during a tournament.
VR would seem ideal for fans at home to better experience the topography of a golf course and how pros play shots, but it’s not as if these golf apps haven’t been tried before. “We were able to take the learnings from past AR/VR/XR experiences and put them into practice with PGA Tour Vision,” said Eric Hanson, PGA vice president of digital product development, in a press release.
Hinsen predicts similar experiences are in the works for concert movies. With Disney+ available on Vision Pro, users could soon watch Taylor Swift's Eras Tour concert film; imagine if the tour were recaptured using volumetric (depth-based) cameras!
“People have been investing in live entertainment, not just sports, but also concerts, and with volumetric capture, for this ability to really be immersed,” Hinsen says.
Now, how much would fans pay to be in the front row — albeit virtually? "Would you pay $100 to be at the front row of a Taylor Swift concert? Or an F1 Grand Prix? Or an NBA game?" Hinsen wonders.
Ad(app)ting to visionOS
As reported by Cathy Hackl in Harvard Business Review, adapting regular iOS apps to the Apple Vision Pro from mobile “is no simple task.” Apple offers a new platform, visionOS Simulator, to develop apps for the Vision Pro; developers have the option to develop in windows, volumes, or spaces.
“This creates an all-new dimension for developers to consider when creating applications for the Vision Pro,” says Hackl.
But there are challenges. These range from designing for hand-gesture input all the way to gaps in team-based knowledge. As for hardware, developers will need a MacBook Pro with at least an M2 Apple Silicon chip — not to mention the cost of the Vision Pro itself.
“Companies will need to decide whether they’re ready to embrace native 3D, spatial content, and whether investing in an app for the first Apple spatial computing device will be a net benefit for R&D,” Hackl says.
Hackl says the upfront costs and investment in development teams should be considered when choosing to plan, develop, and market an app for the Vision Pro.
As with any new technology, early adopters can get a head start in learning what works — and how it works — and stand a chance to gain market share on the new platform.
But even Hackl, an advocate for Apple’s product, says “businesses who wait and learn from the first iteration of the Vision Pro may make up for a later entry with established best practices, seasoned developers, and a larger user base of Vision Pro owners.”
She also warns against platform lock-in. “Developing an app exclusively for VisionOS could mean the app is locked into the Apple ecosystem. Other head-mounted displays like the Meta Quest exist and say they will slowly start supporting spatial video, but rollout for support might be slow. Business professionals should consider where their target audience is and where they will start to migrate to.”
Over time, perhaps key audiences will be on multiple devices and spatial computing experiences may become cross-platform.
That’s a lot of “Ifs.” A wait-and-see approach seems advisable.
Despite AR’s lack of sustained success in the enterprise market to date, many new apps for the Apple Vision Pro target business users.
February 18, 2024
How Apple Makes the “Business Case” for Vision Pro
TL;DR
Many of the 600 new spatial apps designed for the Apple Vision Pro target business users, despite augmented reality’s lack of sustained success in the enterprise market to date.
Despite headset advancements, scalability issues (cost and accessibility) plus the physical implications of being in a headset all day mean the 2D web will continue to dominate immersive experiences in the enterprise market.
The jury is out on whether people will want to be in a hyper-surreal liminal world that is neither reality nor virtual reality for sustained periods.
Although Apple Vision Pro has generated a lot of marketing mileage out of its promise to supercharge entertainment experiences from sports to gaming, movies and shopping, the company is paying at least as much attention to its application in the virtual office.
Indeed, Apple is launching the Vision Pro with 600 apps designed for its spatial computing environment, many of which are intended for the workplace.
It's a strategy that positions the headset as a new type of general-purpose computer fit for both consumer and corporate use, even as Apple itself seems unsure which VR/XR or mixed reality applications will prove most popular, or game-changing enough, for the unit to go mass market.
CEO Tim Cook emphasized that he was seeing lots of interest in the enterprise market during Apple’s earnings call. “Leading organizations across many industries such as Walmart, Nike, Vanguard, Stryker, Bloomberg and SAP have started leveraging and investing in Apple Vision Pro as their new platform to bring innovative spatial computing experiences to their customers and employees,” he said.
Cook cited ideas for business like everyday productivity, collaborative product design and immersive training. As TechCrunch’s Ron Miller noted, the ability to have a so-called “infinite desktop” is key to the productivity argument: “Users can open multiple programs and move them around a huge palette that gives new meaning to extra screen real estate. There’s also the ability for people to build their own apps. As a developer ecosystem begins to emerge, we will start to see apps created from scratch, some for consumers and some for businesses, that have been specifically designed for this paradigm.”
Among the 600 new spatial experiences in the App Store built with the visionOS, there are apps like Box, for managing files and 3D objects, a MindNode app that helps users brainstorm with thought bubbles that float around a user’s space, and OmniFocus and OmniPlan for data and project management visualization.
XR Today highlights six enterprise apps, including training platform SynergyXR, digital twin software JigSpace, and graphic design tool Da Vinci Eye.
Adobe has also ported versions of its creative software, including Lightroom, to visionOS, although early users who have trialed the headgear, like Stephen Shankland at CNET, suggest that field service, training and customer experience are the top use cases for corporates.
IDC analyst Ramon Llamas told TechCrunch, “About 56% use it for training, about 44% for customer facing retail experiences, and 43%-44% for collaboration.”
He added, “When you have enterprise users coming back and saying that they want to get their hands on it, I think that speaks to Apple’s abilities to court the enterprise user with this device.”
Alex Howland, co-founder of virtual environment platform Virbela, thinks that virtual networking events and conferences will become commonplace, enabling professionals to connect, learn, and build relationships in a virtual environment. Writing for Fast Company, he also thinks that advanced haptic feedback of the type enabled by Vision Pro will allow users to feel and interact with virtual objects for a more immersive and realistic experience.
However, Apple — like Meta and Google before it — may have a task on its hands convincing companies to adopt the product at scale.
Howland believes Web-based VR is likely to play an increasingly important role as companies look to these new technologies but there are issues.
“Despite headset advancements — the issues of scale (cost and accessibility), plus the physical implications of being in a headset all day — the web will continue to dominate immersive experiences in the enterprise,” he says.
TechCrunch’s Miller agrees: “For now, Apple’s entry is cool to experiment with, but it’s not clear whether people want to wear a device on their face for hours at a time, no matter how good the interface design may be,” he says.
Some commentators, like venture capitalist Sanjit Singh Dang, writing for Forbes, herald the device as "a gateway to a future where digital and physical realities are seamlessly integrated," whereas others are less sure about how well those boundaries can be united.
Ian Bogost, writing about his experience in The Atlantic, says, “I did feel like I’d been turned into a robot person of some kind. Is that what the creators of these goggles hoped for, or was it just what I expected? If the Apple Vision Pro wants to reconcile life outside the computer and life within it, the challenge might be insurmountable.”
Apple is aware that users might need a little education into the brave new world of Xtended Reality. The company is flying XR experts to US retail stores to prepare employees with a 25-minute demo and onboarding sessions to help guide potential customers towards the product.
Tom Carter, CEO and co-founder of Ultraleap, told Rory Greener at XR Today, “Users want to feel less strain when using XR devices and what sets the Apple Vision Pro apart from other products on the market is an emphasis on a hands-first user interface with your arms in a relaxed position: a pinch of the fingers controls how you view and interact with the content.”
Bogost was more comfortable using the device as a straightforward entertainment device — watching movies, for example, which blocked out real world intrusion and therefore prevented nausea.
Appropriately, he chose to watch Avatar: The Way of Water. “It’s like watching the biggest, brightest television you’ve ever seen, at the proper distance, in a dark room,” was his verdict, “spectacular, so long as you can tolerate the headset’s weight on your face for hours.”
He concedes that Apple may well have delivered a new paradigm for general computing and entertainment but notes, as others have, that future versions will only improve.
“The original Macintosh was only marginally useful, and the first iPhone didn’t do that much,” Bogost says. “With Vision Pro, the company is trying to reconcile, once and for all, the digital and physical worlds. Apple has probably done enough, even in this early iteration, to convince many users that such a bridge can and will be built.”
Apple Vision Pro’s Proposition: Live in a Digital World, Not the Real One
TL;DR
If we’re moving toward a new era of living more deeply intertwined with our digital and physical worlds, what might the implications be?
Some people believe Apple Vision Pro can truly merge the digital and material worlds together, but others think it only succeeded in putting the conflict into higher resolution.
This is another round in the techno-cultural war between Silicon Valley elites obsessed with biohacking their way to immortality and those who think any digital interface divorces us from the very thing that makes us human — physical connection and collaboration.
The launch of mixed reality device Vision Pro has social media commentators and armchair philosophers pondering what it means for us all on an existential level. That’s because Apple’s vision is predicated on a collapsing of the virtual with the real in a more substantive way than anything before — and Apple has a strong track record in leapfrogging the future by birthing new forms of computing.
If Apple were to succeed in attracting enough of us to buy into the concept for everything from daily emails to watching movies and doing the shopping, “it signals a significant sociocultural moment for humans,” digital anthropologist Giles Crouch writes on Medium. “It indicates that, as a society, we are indeed interested in becoming more entrenched in living in two worlds simultaneously, more so than ever before.”
Is Digital Life Compatible With Humanity?
The implications of “living in two worlds” are explored by Crouch and others, and not many like what they think they’ll find.
“Digital-device culture is an experiment on a colossal scale, the results of which we have tried to measure in IPOs, quarterly growth rates, engagement metrics, and daily active users, not in human flourishing,” product designer John Fechtel writes in The New Atlantis. “But that is where we are incurring the real costs.”
Among their points is that Apple is inviting users to exist for long periods of time in a pseudo-reality, where its goggles actually block our view while cameras pipe video of our surroundings into our eyes. No one, it seems, is quite sure what psychological effect this will have on us.
“I had no trouble getting up from the couch to grab my laptop from the other room, but the world jittered in the display, sprouting fuzz around its edges,” describes Ian Bogost, who had a go with the Vision Pro for The Atlantic. “Objects throbbed in time with my footfalls. Every jostle or cough made reality shudder. This was a rapidly updating computer-desktop-background version of the world, rather than a view of things as they really are.”
Exclusive to the Vision Pro, Apple is touting a new form of 3D with 180-degree 8K resolution video and spatial audio it calls “Apple Immersive Video.” Users themselves can record video of the spatial environment they see in what Apple’s marketing blurb suggests is akin to recording memories.
Black Mirror fans might recall a typically on-point and dystopian fable on the subject in the episode, “The Entire History of You,” which was released 13 years ago.
“The idea that family videos might be made into more perfect stand-ins for our lived experience suggests the grander Apple vision,” as Bogost puts it.
But what’s so wrong with the physical world anyway? Or rather, what’s so wrong with our physical bodies that we feel the need to “augment” them with digital wearables for a better experience?
That’s the central point of a musing by technology theorist L. M. Sacasas and the essay by Fechtel, who sets the scene by describing what users see when wearing the Vision Pro.
“The feed populates your physical space with digital interfaces, taking the type of prompts you would normally find on your computer home screen and laying them over your actual home,” Fechtel writes. “There is even a dial that lets you adjust how much of the real reality you want to see.”
Noting that users are able to open an app “just by glancing at it,” Fechtel says Apple wants to bring us ever closer to its entertainment and social environments by steadily reducing our physical interactions with the device.
“The hope is that soon you will hardly have to do anything to the device at all. Instead, it will watch your eyes, face, head, and hands to intuit your meaning.”
Removing friction from the user experience is how Apple has successfully won market share and will continue its dominance, so further refinements are in the cards.
The end game is that “the interface is no longer the device itself, but our bodies and the space around us,” Fechtel says.
But consider just how intimate this interface is, he warns. “To detect what you want to do, Vision Pro has to place an extensive sensor array right up next to your eyes to gauge their movement. To give you a fully immersive video feed, it has to fully cover your eyes. This enhanced-reality approach has strange consequences,” he says.
“For example, when you FaceTime someone with Vision Pro, your interlocutor can no longer actually see your face. So the device has to synthesize your mood, intent, interest, and attention, and represent you to the other person as a realistic digital avatar.”
The result is that the closer the device gets to realizing what you really want, “the closer it must draw to your person. And the closer it draws to your person, the more it cuts you off from the world.”
Not Ready, Player One
This is the same type of techno-fear that greeted Mark Zuckerberg’s Metaverse and that was realized in the dystopian novel Ready Player One: People with machines strapped to their faces experience the dopamine of the virtual world, which is presented as an escape from the poverty of actual life.
On one level, how will we feel about people wearing mixed reality glasses in social settings when AR, combined with AI and other tools, may be perceived as offering some form of social advantage?
“Will we be suspicious of someone recording events that could be used for public shaming or, even more insidiously, blackmail at a later date?” poses Crouch. “Does it create more of a digital divide with those who don’t have AR glasses and devices feeling less equal and therefore missing out?”
Sacasas’s view is best summed up as “the worst thing about the age of digital media will turn out to be how hard it became to look one another in the eyes.”
He, too, is interested in the idea — voiced by cultural critic Wendell Berry — that the next great division of the world will “be between people who wish to live as creatures and people who wish to live as machines.”
He asks why we have a tendency to embrace technology at all and concludes:
“In certain contexts, machines can operate at a pace, scale, precision, and intensity with which creatures cannot compete. When machine-like consistency, efficiency, speed, or production is demanded of creatures, then creatures are made to live as if they were machines,” Sacasas says.
“This never ends well for human creatures. Most people know this, it’s just that some see this as cause to transcend the human and others see it as cause to re-imagine the human-built world.”
Sacasas continues, “We have been schooled by dominant social structures to presume that the limitations inherent in our embodiment are merely obstacles to be overcome rather than the parameters within which meaningful, satisfying, and purposeful lives might unfold.”
Like Fechtel, he laments the prop of the machine to augment reality, because doing so only succeeds in excluding the tactile, genuine and “beautiful” experience we have directly through our own senses.
He uses as a provocative example the image posted by tech analyst Anshel Sag in “dad mode,” wearing the Vision Pro to watch Avatar while cradling his newborn baby. It’s already divided the internet.
Sacasas is not aiming his ire only at Vision Pro. He finds this the latest in a long line of tech — from mobile phones to laptops and autonomous cars — offering ways we might isolate ourselves from the world.
Scathingly, he writes of anyone thinking of using wearables to live in: “Increasingly captivated by virtual worlds, I am less likely to demand anything more or better. The tools that diminish my capacity to experience the world in full simultaneously promise to give me a better-than-real world.”
The more pressing concern is whether a growing enthrallment to devices like these “precludes an affective attachment to the world,” he worries. “And to the degree that our virtual worlds are bespoke realities ostensibly constructed for us, whether they also deprive us of an experience of a common world, one we share with others, thus accentuating rather than alleviating an experience of alienation and isolation.”
Logically, the next step on the march to reducing friction between our bodies and our digital twin is some form of mind meld with the machine. Brain implants are already underway at Elon Musk’s lab Neuralink, which received FDA clearance in May to begin its first human trial.
“Once we regard the body as an instrument, it can also become an interface, with implants serving as the input devices,” says Fechtel. “Together, the body and its integrated devices can overcome the physical and mental limitations, the so-called ‘user errors’ that hold back our interactions with devices.”
The Pros of the Vision Pro Approach
Why might these technologies succeed, despite their invasiveness and despite the controversy and criticism surrounding them? One answer is that we are still limited by our interfaces.
Fechtel says, “After years of using touchscreen keyboards, our thumbs still butcher even simple sentences. Voice dictation was a logical next feature to offer, but it too is buggy. What if we could just think and have our intent acted upon? The perfect interface is no interface at all.”
What if we’re using AR glasses and have a brain chip implant? Does the chip help us to better manage the challenges of dealing with all that information and processing living in two worlds? Right now, we have no idea, says Crouch.
He says, “We are in the very early stages of bringing technology more closely to our bodies and augmenting our cognitive capabilities. What we don’t know yet is if culture perceives this to be of enough value as a tool to advance the survival of our species.”
This is now becoming a techno-cultural battle between those elites in Silicon Valley biohacking their bodies on a quest for immortality and those who think the obsession with techno-transplants divorces them (and the rest of us) from the very thing that makes us human — physical connection and collaboration.
Fechtel thinks we need courage to fight back. “Most of us do not have a strong enough sense of the indispensability of the body to resist having yet another layer of ourselves peeled away and a device put on us instead. Most of us already have devices too close to us, and our digital persona feels too real, and too far from our physical life, to resist the closing circle of each new epochal device. We need to get comfortable in our own skin again. To do this, we need courage and optimism.”
The Zone of Interest opens by introducing viewers to a husband, wife and their family as they picnic by a river. The scene seems idyllic. Shortly after, the family drives home. Everything seems normal — but on closer inspection, their number plate is adorned with the insignia of the SS, the elite guard of the Nazi regime.
This scene encapsulates why The Zone of Interest is so unsettling. It depicts the everyday life of Auschwitz commandant Rudolf Höss (Christian Friedel), his wife, Hedwig (Sandra Hüller) and their family — yet, industrialized, genocidal violence moves along, continuously, in the background and periphery.
Through meticulous camera setup and editing, writer and director Jonathan Glazer’s film depicts scene after scene of the daily routine of Rudolf, Hedwig and their children. Just as the viewer is becoming immersed in these scenes — perhaps even disturbingly aligning themselves with these characters — the film jarringly cuts away.
It cuts to abstracted, pure, bright red or white frames, to Mica Levi’s gut-wrenching, cacophonous soundtrack, or scenes disruptively shot with night vision cameras.
Creating the Höss Home
Production designer Chris Oddy and his team built the Höss house and planted the garden from scratch. The set was constructed on location in Poland, directly next to Auschwitz, using detailed, historical records and photographs.
While domestic scenes of Höss and his family are routine, Johnnie Burn’s sound design is abrasive and incessant.
A continuous, low-pitched, industrial humming sound plays throughout. Its source is not shown, but it could be the sound of the crematoria. As the family go about their daily lives, there are distant screams, shouting, dogs barking and gunshots. The Höss family don’t seem to register them, apparently desensitized.
The film is formally rigorous. It makes use of the hidden cameras Glazer previously deployed in the docu-fiction Under the Skin (2013). With director of photography Łukasz Żal, Glazer positioned 10 fixed cameras and 20 microphones around the Höss house.
Żal has described how he and Glazer endeavored to use exclusively natural lighting, to avoid aestheticizing the unthinkable.
The Problem with Making Films About the Holocaust
The problem with making films about the Holocaust is that formal decisions inevitably also become ethical ones. The choices filmmakers make about camera movements, angles, lighting and editing have as much ethical significance as what is in front of the camera.
Audiences are familiar with (perhaps anesthetized to) the typical iconography of the Holocaust in film, such as watchtowers, barbed wire and smoke. In her comprehensive study of Holocaust film, Indelible Shadows (1983), historian Annette Insdorf describes these images as a kind of figurative shorthand: “the visual part representing the unimaginable whole.”
In The Zone of Interest, these recognizable images are disturbingly estranged. For example, from the perspective of the garden, a watchtower is visible in the background, partially obscured by pristine white sheets hanging on a washing line.
Instead of showing the operation of the crematoria in closeup, the pulsating orange-red light from their chimneys illuminates Hedwig’s mother’s bedroom, disturbing her sleep. Children play in a pool in Hedwig’s cherished garden, atop which is a shower, symbolically associated with the extermination of innocents in the camp.
Representing Gas Chambers on Film
Mainstream cinema relies on narratives of conflict and resolution, and so tends towards a binary notion of “good” and “evil.” The Zone of Interest, however, in Friedel’s disquieting, cold portrayal of Höss, shows that genocide is perpetrated not by “evil,” but by administrators — concerned with numbers, timetables and blueprints.
Höss is shown meeting with several men in suits, matter-of-factly explaining detailed plans for more efficient gas chambers immediately after a scene where Hedwig gossips with her friends. Such a contrast underlines the ordinariness of perpetration.
Critics have praised or condemned films about the Holocaust such as Son of Saul (2015), Schindler’s List (1993) or The Boy in the Striped Pyjamas (2008) for the techniques they use to represent gas chambers. Claude Lanzmann, director of Shoah (1985), once said that if direct footage of the act of genocide in the gas chamber were to exist, he would destroy it. Watching it would transgress the most profound moral taboo.
One of the starkest moments in The Zone of Interest is its representation of the gas chambers. Rudolf Höss is leaving a party for the Nazi leadership in Berlin. He looks at something off screen, down a dark corridor. The film cuts to a close-up of a small, white circle, surrounded by blackness — the spy hole on the door of a gas chamber.
This is followed by a disorienting jump to the present day. Cleaners dust the piles of victims’ suitcases, shoes and other personal belongings in the Auschwitz Memorial Museum.
Glazer was stunned to discover that one of the walls of the Höss family garden directly bordered the extermination camp. He has said that the concept of his film “became about that wall.”
Glazer’s film underscores how walls, borders or geographic distance “compartmentalize,” in producer James Wilson’s words, the suffering of others. Such visual barriers become the mechanism through which genocide has been — and continues to be — enabled.
The Zone of Interest is unsettling because of how it portrays the perpetration of the Holocaust as normal, leaving the viewer with the disquieting notion that anyone, in the right conditions, can be responsible for unspeakable things.
Director Jonathan Glazer pursued an immersive naturalism in “The Zone of Interest” by removing the artifice and conventions of filmmaking.
February 15, 2024
What’s Driving the AI-Era of Audience Engagement? Join Us March 28 for an NABiQ Live Learning Session
TL;DR
NABiQ Deep Dive is a series of live, virtual challenges and workshops leading up to the dynamic in-person innovation sprint and creative networking event in Las Vegas.
Aiming to push the boundaries of how we connect with and captivate audiences in the future, NABiQ Deep Dive will finish strong with the “Drive the AI-Era of Audience Engagement” workshop, scheduled for March 28.
Facilitated by innovation consultant, certified design sprint master and startup coach Maria Halse Duloquin, this 60-minute online knowledge exchange will explore strategies for leveraging generative AI to better engage with audiences in 2024 and beyond.
Wondering what’s next in the world of intelligent interactivity? Considering how to effectively capitalize on your existing community? Join NAB Amplify (and your colleagues!) on March 28 for NABiQ Deep Dive: Drive the AI-Era of Audience Engagement.
This workshop will explore: How might we harness the full potential of Gen AI and intelligent technologies to drive a new era of audience engagement and interaction in advertising? What’s important for viewers’ brand relationships in the future?
From creating personalized and smarter advertising campaigns to experimenting with cutting-edge tools, consider how Gen AI will help evolve the way we interact with audiences and communities.
This year, NABiQ is breaking new ground by introducing live challenges on NAB Amplify leading up to the in-person event in Las Vegas. The digital, three-part series of Deep Dive topics revolves around the future of AI, including content creation, content delivery, and audience engagement and interaction.
Insights from these Deep Dive digital workshops will inform the on-site NABiQ brainstorm sessions at NAB Show, running April 13-17 in Las Vegas.
NABiQ stands out as a unique education and networking opportunity to collaborate, share ideas and overcome industry challenges. Structured like a hackathon, in-person participants form small groups, tackling specific challenges and presenting their innovative solutions around the three show pillars: Create, Connect and Capitalize.
The AI Broadcast TV and AI VFX & Motion workshops will also be offered at the 2024 NAB Show in Las Vegas, available to NAB Show registrants for $299. Both three-hour sessions are scheduled for Wednesday, April 17 at 2 p.m. (PT) in the South Hall of the Las Vegas Convention Center.
NAB Show and FMC also offer a corresponding certification for each course of study. Certification exams cost $149 and are scheduled for 45 minutes. Those who are certified will then be added to a published database of AI professionals.
“NAB Show is committed to empowering broadcasters and professionals across the full range of creative fields with the knowledge to integrate AI into their work, in a way that ensures they are able to remain current and innovative,” said NAB Global Connections and Events EVP and Managing Director Chris Brown.
Dive into the trends shaping the creator economy in 2024 with insights from Inside the Creator Economy editor and publisher Jim Louderback, and read on to discover what changes are ahead for creators.
Louderback is confident that “short form [content] is not going anywhere,” despite some articles proclaiming a return to longform media. “People will be consuming a lot of it in those gaps of time between when they’re living their real life,” he says.
On the subject of content formats, Louderback says video podcasts are also on the upswing: “We’re seeing that video podcasts, whether it’s on Spotify or on YouTube, are actually outperforming audio podcasts.”
Another trend that’s accelerating “is the move to live content,” per Louderback. He points to several reasons behind the trend.
“We see TikTok and others really leaning into live shopping, making live streaming super helpful. There are great ways to monetize yourself, if you can build a live audience on Twitch, on TikTok, and on other platforms,” Louderback explains.
It’s also a reaction to one of 2024’s super trends: AI. “There are a lot of problems out there with AI deepfakes and not knowing if something is real or not real. That’s going to ratchet up this year,” Louderback says.
In contrast, live content is a reassurance to your audience that your content is real. Your mistakes prove that you’re “not an AI construct because… AI doesn’t really mess up like that.”
(In-person events, in addition to streaming content, are also predicted to benefit from this deepfake backlash.)
Monetization Strategies
“Creators are waking up to the fact that they cannot rely on platform-only revenue to make a living,” Louderback says.
Frankly, “A lot of the revenue that creators counted on in years past has gone away. So Facebook [is] not sharing as much revenue; TikTok [is] cutting back on funds and doing other strange things. And just a lot of revenue sources are going away.”
So in 2024, he says, “It’s all about building your own revenue mixes and doing yield management to figure out ‘How do I take people and think about the platforms as more top-of-funnel and awareness and move my biggest fans over to my own platforms?’ Whether it’s a community, a course, merchandise, or other ways for them to pay you directly.”
A common spin-off of this is creators “launching their own products,” which Louderback notes has come with its own set of headaches (not to mention unsold inventory). He emphasizes considering “What are you best at as a creator?” before moving into a different model.
Content Distro
Windowing has come to the creator economy!
Louderback points to the recent kerfuffle over MrBeast’s X content experiment (repurposing existing YouTube videos to the Elon Musk-owned social media site, FKA Twitter) as an example of windowing.
“You may not see the same success on X” by repeating MrBeast’s trial, he says. “But there are other ways to do this sort of windowing of content, including putting them all together in a single, like, 20-, 50-, 100-hour stream and creating a live stream platform on Roku, on Pluto or others.”
Additionally, Louderback points out, “we’re starting to see new AI tools that let you take, let’s say, that 20-minute long form that you did, and cut it up into shorter form videos.”
Not only do these tools help you make a quick cut, some can even reformat your content so that it appears in the appropriate dimensions for the destination platform. AI can get creators quickly to “50, 60, 70% of the way” there in these expedited workflows.
The Rise of the Global Creator
Speaking of transforming one piece of content into many: Translation and dubbing services have also been revolutionized by generative AI, Louderback says. In 2024, he says, “If you’re creating great content, and you dub it the right way, and get it out there, you can reach a global audience.”
We’re going to see “the rise of the global creator,” he predicts.
“Using AI and other things, you now can take the content that you do in one language, dub it in another language, it will automatically use your voice speaking that language,” he explains. “And in some ways, it will actually change your face so that it makes it look like your face is speaking French instead of English.”
For example, he says, “YouTube now allows you to have a single channel with multiple voice versions, and it’ll serve [the appropriate] voice version to the native speakers.”
Understanding the Creator Life Cycle
Content creators face the same challenge that has always dogged professional athletes: You’re not going to do this job forever.
“Creators have a life cycle,” Louderback says. “Short form creators, maybe one to three years; longer form creators, five to seven years. What do you do after that?”
Louderback predicts: “We’re going to hear a lot more about life after creating.”
No matter how long you stayed in the creator game, Louderback says, “You’ve just gone to the University of YouTube or the College of TikTok. And there are a lot of opportunities for creators who no longer have those big audiences, but can use that expertise, working in brands, working at events, doing other things.”
And by the way — creators are starting to determine their own course of study long before the senior thesis.
“Creators realize they are more and more in charge,” Louderback says. “And they’re the ones saying ‘no’ to brands that don’t fit their audience because they’ve realized their audience is more important than the brand dollars.”
Part of what’s driving this change is that “social video platforms have become a mature industry,” according to Louderback.
Of course, maturity doesn’t necessarily equate to stability for creators. These platforms, Louderback says, are “doing things that mature businesses do. They’re stealing creators from other platforms. They’re shoving a bunch of different features out there.” The companies are experimenting constantly to retain audiences or, ideally, to incrementally grow their market share.
Louderback points out that YouTube and TikTok’s forays into the world of television are great examples of this.
A Controversial Prediction About AI
Louderback is not convinced that we can rely on AI getting exponentially better, at least not forever. He wonders if 2024 is going to be the year of “garbage in, garbage out,” in programmer parlance.
He says, “AI is going to start feeding on itself” because “tier one sources” (ahem, New York Times, Wall Street Journal, etc.) will “have been shut off” via legal and societal pushback.
If this does indeed come to pass, Louderback says, “The quality of AI, particularly in writing and creating content, will go down because … it’s gonna start training on all those crap AI articles that are flooding the internet.”
Those attending this spring’s NAB Show will have the opportunity to dive into these trends and much more at Creator Lab. (Louderback serves as one of the content curators and program producers, along with Robin Raskin.)
Louderback emphasizes that this is not another meet-and-greet. Creator Lab is focused on developing “infrastructure for creators” and treats creators like “direct-to-consumer CEOs trying to build a business,” he says.
A big part of that infrastructure is literal. Louderback says Creator Lab will feature “the studio and the technology and the gear to actually create great stuff” and will show attendees how to “do it so that you can compete with other people or not waste a lot of money or time.”
But then there’s also the metaphysical system surrounding content creation. Louderback says they’ll cover: “How do you find the resources? How do you use new tools to upskill and uplevel what you do? How do you find and build the right support teams?”
Creators may be “the CEO of their small business or a large business, direct-to-consumer enterprise. But they need help. They can’t do it all. So how do you find that, too?” Louderback and Creator Lab will provide the resources creators need at NAB Show 2024.
The 2024 NAB Show will feature Creator Lab, a new show floor experience helmed by Jim Louderback and Robin Raskin.
February 29, 2024
February 13, 2024
Navigating the Creator Economy: The Rise of Micro and Nano-Influencers
TL;DR
Currently valued at $250 billion, the creator economy is booming, fueled by the $21.1 billion influencer marketing industry.
Aspire’s sixth annual benchmark report, “The State of Influencer Marketing 2024,” identifies micro and nano-influencers as one of the biggest trends to watch this year.
Micro and nano-influencers are emerging as key players due to their high engagement rates and authenticity, with brands shifting away from celebrity endorsements.
The rise of “lo-fi” content highlights the demand for genuine, relatable influencer-generated content.
Effective strategies include affiliate marketing, leveraging influencer-generated content, and fostering long-term relationships.
It’s no secret that the creator economy is booming. Goldman Sachs currently values the burgeoning creator economy at $250 billion, and predicts that it could reach as high as $480 billion by 2027. Fueling that explosive growth is the influencer marketing industry, assessed to be worth $21.1 billion by Influencer Marketing Hub, which reported a 29% increase from $16.4 billion in the previous year in its “State of Influencer Marketing 2023: Benchmark Report.”
“Influencer marketing shows no signs of slowing down in 2024 as it continues to prove its worth to brands around the globe,” Adam Rossow trumpets at Adweek. “It’s engaging, it works and is cost-effective compared to other channels.”
Marketing strategy agency Aspire’s sixth annual benchmark report, “The State of Influencer Marketing 2024,” pegs micro and nano-influencers as one of the biggest trends to watch in influencer marketing this year. According to the report, nano-influencers consistently achieve the highest engagement across all platforms, at an average engagement rate of 4.4%.
“Evolving social media algorithms have changed the influencer marketing landscape so that anyone has the power to ‘influence,’ whether they’re a social media megastar with 10 million followers on Instagram or a loyal customer with 10 close friends,” finds the report, which uses statistics from the company’s internal data, as well as survey results from more than 700 marketers and creators.
The Evolving Landscape of Influencer Marketing
Influencer marketing has traditionally been dominated by macro-influencers and celebrities, whose vast followings promised wide reach. However, the landscape is shifting towards micro (1,000 to 100,000 followers) and nano-influencers (less than 1,000 followers), driven by a demand for authenticity and relatability.
Brands are catching on, turning away from traditional celebrity endorsements towards more authentic, relatable content creators. This pivot has put the spotlight on micro and nano-influencers, whose smaller but highly engaged followings offer unique value to brands.
“Savvy brands are poised to significantly ramp up their use of highly targeted, hyperlocal campaigns, along with a focus on niche influencers or nano-influencers,” Danielle Wiley predicts at Forbes.
This shift is not merely a trend but a response to the evolving expectations of consumers who crave genuine connections and content that resonates on a personal level. According to Dmitrii Khasanov at Entrepreneur, nano-influencers excel in fostering these genuine connections within tight-knit communities, while micro-influencers are seen as niche experts, captivating dedicated audiences with their specialized focus. This expertise and authenticity translate into higher engagement rates, making micro and nano-influencers incredibly effective partners for brands looking to tap into specific markets.
“The key to forging authentic connections with consumers lies in partnering with content creators who deliver visibility, genuine endorsement and effective advocacy,” Khasanov writes. “Marketers have shifted their focus, acknowledging that the quality of engagement trumps the sheer size of an influencer’s following.”
Aspire’s benchmark report highlights another critical aspect of this shift: the role of influencer marketing in driving efficient growth. As brands navigate rising ad costs and tighter budgets, influencer marketing emerges as a key strategy for optimizing marketing spend and improving ROI while reducing customer acquisition costs.
“Ninety-three percent of creators are willing to work with brands for just free products,” Magda Houalla, Aspire’s director of marketing strategy, emphasized during a webinar touting the company’s new report. That figure is up 13% over last year.
“This, of course, is really exciting for brands, because it shows that there is still a world where gifting makes sense,” Houalla said. “And I think it’s also really exciting for up and coming creators. Because there are a lot of creators that are looking to get their foot in the door, they want to start creating content on behalf of brands. And so long as they love the brand, and they have a product that they’re excited about, they are more than willing to work with you.”
The Rise of Lo-Fi Content
Micro and nano-influencers offer several advantages, including higher engagement rates, targeted reach within specific niches, and a perceived authenticity that resonates with today’s consumers. Furthermore, the preference for authentic, relatable content, as indicated by the rise of “lo-fi” content — photos and short-form videos with a DIY quality and little-to-no editing — supports the effectiveness of micro and nano-influencers.
“The content landscape is changing. Today’s audiences aren’t looking for content that is overproduced,” the report says. “Instead, we’re seeing a return to the scrappy, low-budget media of influencer marketing’s earliest days. But this time, it’s on purpose.”
To harness the power of lo-fi content, says Aspire, it’s vital to take a video-first approach to content. “Video content has continuously risen in popularity over the last few years — with no signs of slowing down. According to our data, 40% of marketers are already saying short-form video has the highest return on investment.”
Working with micro-influencers is a key strategy for marketers searching for lo-fi content, says Aspire. “Micro-influencers (creators with between 10,000 to 60,000 followers) are most likely to create the most authentic and relatable content, as many still work a nine to five or are in school, and make content as a side hustle.”
Finally, for their campaigns to be successful, brands need to set guardrails, not guidelines, for working with micro-influencers. “Consumers are hyper aware of sponsored content that is inauthentic. Although it’s important to provide some direction and inspiration, give creators enough creative freedom to produce content that aligns with their personal brand. After all, creators know what will resonate with their audience the best.”
Integrating Micro and Nano-Influencers into Digital Strategies
The emphasis on authenticity and the pursuit of long-term collaborations are particularly pertinent when working with micro and nano-influencers, Wiley notes at Forbes. Their close relationships with audiences demand that partnerships are authentic and built on trust and mutual respect.
“Instead of trying to appeal to everyone everywhere, these strategies are about connecting deeply with specific communities and interests. It’s like having a conversation with a neighbor rather than broadcasting to a crowd — more personal, more relevant and, often, much more effective.”
For brands looking to tap into the power of micro and nano-influencers, Aspire has actionable advice on integrating these content creators into their marketing mix. Ensuring authenticity, aligning influencer partnerships with brand values, and fostering long-term relationships with influencers are key considerations.
Incorporating micro and nano-influencers into affiliate marketing campaigns can significantly enhance a brand’s reach and revenue, allowing brands to engage with niche audiences in a cost-effective manner. This strategy leverages the influencers’ high engagement rates to drive sales, compensating influencers based on conversions.
Repurposing content created by these influencers across various marketing channels can optimize ad performance and enhance content authenticity. This approach not only improves ad effectiveness but also aligns with consumers’ preferences for genuine, relatable content.
Transitioning from one-off campaigns to long-term partnerships with influencers fosters a sense of loyalty and authenticity. Developing deeper connections with influencers and their audiences over time can transform successful collaborators into brand ambassadors.
And, of course, investing in influencer marketing platforms like Aspire can help brands efficiently scale their influencer marketing efforts. These tools streamline the process of managing influencer campaigns, from discovery and vetting to content creation and analytics, especially when working with a diverse array of micro and nano-influencers.
By embracing these strategies, brands can effectively integrate micro and nano-influencers into their digital marketing mix. This shift towards more authentic, community-focused marketing approaches highlights the evolving nature of digital strategies and the growing significance of influencer marketing in the contemporary digital landscape.
10 Things About Influencer Marketing You Need to Know in 2024
In a blog post, Aspire lists the 10 most noteworthy influencer marketing statistics from their latest industry benchmark report, “The State of Influencer Marketing 2024,” which is now in its sixth year:
69% of marketers are planning to increase their influencer marketing budget in 2024.
90% of brands plan to increase their presence on Instagram in 2024.
TikTok is the most popular channel for creators in 2024.
YouTubers achieve a whopping 49.5% engagement rate.
93% of creators are willing to work with brands just for free products.
Nano-influencers consistently achieve the highest engagement across all platforms at an average engagement rate of 4.39%.
64% of brands are working with smaller creators.
57% of creators have maintained the same rates in the last 12 months.
63% of marketers say influencer-generated content performs better than other brand-directed content.
94% of brands plan to invest more into video content in 2024.
These Are the Entertainment Industry Jobs That’ll Be Impacted by AI (Yes, Some for the Wrong Reasons)
TL;DR
CVL Economics surveyed 300 leaders in the entertainment industry about generative AI to investigate how uptake of the technology will likely affect M&E jobs in the near term.
A number of job functions are projected to be especially vulnerable: sound designers, 3D modelers and foreign language dubbers. Also at high risk: Those seeking entry level positions and contract work. Writers and vocal/music performers are expected to fare better, at least in the near term.
However, the associations that commissioned the study intend to use the information to fight back against the negative impacts of Gen AI uptake during their upcoming contract negotiations.
Do you think your job is safe in the age of Gen AI? I have some not-so-great news for you: Your boss’s boss probably doesn’t agree.
In Q4 2023, consultancy CVL Economics surveyed 300 entertainment business leaders to assess how generative AI will likely affect the M&E workforce.
Respondents represented six sectors: the film, television, animation, music, sound recording and gaming industries. About two-thirds of those questioned agreed implementation of Gen AI is likely to “play a role in consolidating or replacing existing job titles,” according to the study.
Generative AI has come to the forefront of the public imagination “[a]t a time when several entertainment industries are facing challenges,” the report notes, adding that “the desire to increase productivity, cut costs, and identify new revenue streams will be top of mind.”
But the good news is that a majority of respondents said, “GenAI has already led to the creation of new job titles and roles in their organization and anticipate GenAI technology will be responsible for the creation of new job opportunities.” (But that’s no guarantee that the scales will ultimately balance for workers in any industry.)
For context: “[T]he pace of AI integration into creative job roles is increasing at a rapid clip; between 2020 and 2022, for example, the number of job postings that listed the ability to use artificial intelligence tools as a desired skill increased by 122%,” The Hollywood Reporter’s Winston Cho points out in his coverage of the report.
CVL writes: “[C]reative workers will be facing an era of disruption, defined by the consolidation of some job roles, the replacement of existing job roles with new ones, and the elimination of many jobs entirely.”
But We’re Not Taking This Lying Down
Disney Animation Technical Director Brandon Jarratt, who serves on the Animation Guild’s executive board and AI task force, told the LA Times’ Christi Carras that Local 839 will use the study to inform strategy and goals for upcoming negotiations.
The Animation Guild is not the only M&E union likely to follow the blueprint laid out by the WGA and SAG. CVL notes 8% of jobs in the arts, design, and entertainment have representation and collective bargaining. That figure is higher when you narrow it down to the motion picture and sound recording industries (17%) and broadcasting (11%).
CVL concluded that “[c]reative industry leaders are largely embracing GenAI technology, and most recognize that operational benefits in the future will come at a cost to many creative workers.”
The Animation Guild, Jarratt told Carras, aims to “help set the industry standard for what kind of tools are appropriate and … going to help artists, and which ones are going to hurt them and hurt their livelihoods.”
After all, “[t]he tool itself is almost never the issue,” Jarratt explained to Carras. “The studios are always looking for ways to spend less money. And if they feel like they’re going to be able to cut budgets in order to meet shareholder projections or whatever, then they’re going to try to exploit that in any way they can — and that’s where the fear comes from.”
Lest you think that fear is overblown: In November, DreamWorks founder Jeffrey Katzenberg predicted that Gen AI will replace up to 90% of film animation jobs, THR’s Cho notes.
“We’re seeing a lot of role consolidation and reduction,” Concept Art Assn. co-founder Nicole Hendrix told Cho. “A lot of people are out of work right now.”
As a whole, the media and entertainment industry seems more inclined to embrace generative AI — and more likely to be an early adopter. CVL reports that 25% of respondents said they are currently using some kind of Gen AI, and nearly half (47%) indicated they are planning to implement generative AI soon, in some fashion.
CVL calculated that Gen AI is likely to disrupt “approximately 203,800 payroll jobs” in the US.
In the parlance of the study, a “job is considered disrupted when a significant amount of tasks within that role are consolidated, replaced or eliminated as a result of AI,” per Carras.
She writes, “In an industry already brought to its knees by the COVID-19 pandemic, overspending during the streaming wars, overlapping labor disputes and mass layoff-inducing corporate mergers, AI is just one more wrench to worry about for entertainment workers.”
“Among the top tasks flagged as likely to be impacted by AI: creating realistic sound design for film, TV or games; developing 3D assets; and creating realistic sounding foreign-language dubbing. The tasks least likely to be affected include writing film, TV or game scripts, as well as performing music or vocals,” Cho summarized.
Note that more than half of those expected to be affected reside in a handful of U.S. states: California, New York, Georgia and Washington (places known for their connections to film, television and gaming).
CVL also points out that gig workers and freelancers are likely to be hit hard by the incorporation of AI tools, while conceding that “change may not be systematically understood or visible beyond anecdotal data” for these types of workers.
Still, the impact is worth flagging: the report estimates that 29% of U.S. arts, design, entertainment and media workers are self-employed or operate in a similarly independent fashion, more than four times the national average for all major occupation sectors.
CVL tried to suss out the impact by comparing firms that lean heavily on freelancers with attitudes toward early Gen AI adoption, and found an overlap for nearly eight in 10 companies.
Also likely to be on the Gen AI chopping block are entry level positions, which will have downstream effects on the talent pool. The report frets that this in turn will limit networking opportunities and the acquisition of “domain knowledge,” which are cumulative.
“When you’re looking at any technology that’s essentially replacing [or consolidating] a junior or entry-level role … it is harming the ecosystem,” The Concept Art Assn.’s Hendrix told Carras. “What does that mean if nobody’s really entering in and the bar is now this immovable wall?”
And unless firms are especially careful, DEI initiatives are likely to suffer, CVL cautions.
“Aspiring workers from less affluent and underrepresented backgrounds have historically leveraged these entry-level roles as a pathway into the entertainment industries and to higher-paying positions,” Cho writes.
However, these outcomes are not inevitable (except, perhaps, the integration of generative AI into nearly every workflow, in the same way the Internet has permeated our lives). Understanding trend lines is not synonymous with seeing an unchangeable future. Unions and regulatory bodies will have a say in how this plays out, in addition to corporations and creatives.
As Cho puts it: “The future is not yet written, and it needn’t be generated by AI.”
Amid fears over the use of generative AI in Hollywood, artificial intelligence seems less likely to replace humans than to assist them.
February 9, 2024
The Visual Poetry in “All Dirt Roads Taste of Salt”
TL;DR
Shooting on celluloid and creating tactile imagery was key for cinematographer Jomo Fray with director Raven Jackson’s debut feature “All Dirt Roads Taste of Salt.”
Fray lensed a love letter to the last three generations of women in Jackson’s family, with the aim of creating evocative visuals in which the viewer would not only see the story, but feel it as well.
In the spirit of the “Dogme 95” manifesto, the DP and director wrote a visual manifesto that they read to each other every day during production, which included stripping back the paraphernalia of on-set distractions.
Director Raven Jackson and cinematographer Jomo Fray used the language and techniques of poetry to create Jackson’s debut feature All Dirt Roads Taste of Salt.
The meditative experimental drama explores the influential people, places, and events Mack (played by Kaylee Nichole and Zainab Jah as younger and older versions of the character) has encountered while growing up in Mississippi.
A mantra for Fray (whose credits include Tayarisha Poe’s 2019 feature Selah and the Spades and Jackson’s 2018 short Nettles) is to shoot the way a director dreams.
The film has no set narrative structure. “It was about the texture of that emotion rather than covering it more traditionally,” Fray said. “It’s about building a scene that has specific coverage knowing that those scenes could be put into a different order but that’s why you find motifs naturally as you’re shooting.”
Fray elaborated to Bale on finding the visual language for the film, saying that they wanted to make cinema more sensorial.
“The conversations we had early on were like, ‘Can we smell this image? Can I feel this image? In this image, I want to literally feel the salt on my brow from the sweat of being in the sun so long. How do we conjure that? How can we create more sensorial feelings and textures in every moment, every image, every gesture, every detail?’”
To push themselves to adhere to this aesthetic they drew up a 12-point visual manifesto inspired by the work of director Terrence Malick, which they read to one another at the start of every single day.
One of the points on the manifesto was “to be present to the cinema on set.”
“Raven would direct the action and we would find the scene, and we would watch it together in rehearsal and start to see the small gestures that are built into the rehearsal that we just find our eyes drawn to,” Fray said when interviewed by Stephen Saito for The Moveable Fest.
Another manifesto point was to speak in “slant rhymes,” which was a phrase they used to remind themselves that it was okay to be inspired by the same thing multiple times in multiple scenes.
“It was our attempt to try to create these motifs in the film, but to create them in a naturalistic way, so we didn’t go in thinking, ‘We want to shoot a lot of hands,’ but we would find ourselves being inspired by what a character’s hands were doing at a certain moment in the scene, and we didn’t try to stop ourselves to have visual diversity with the film.”
The manifesto also required them to be elemental, which meant being emotionally open day-to-day, scene-to-scene, moment by moment on set.
“We chose tools so that the camera and the lighting can get out of the way a bit. Raven never really wanted a lot on set and didn’t want any artifice when we could avoid it, so my gaffer Jay Warrior and my key grip Forrest Penny Brown created a lot of our lighting from outside in, a lot of the time using mirrors, to create that feeling.”
Inevitably, this meant the film was shot on 35mm. They did a lot of testing for different ways to process the film as well as with different perforations mixed with different lenses.
“So much of this movie has to do with interiority, so we wanted a tool that could really make us be inside their emotions, and have it be incredibly textural, so that you feel the coarseness of the image between your fingers,” Fray told Bale.
They selected Kodak 500T 5219 stock, push-processing the entire movie.
“We only used one stock, even though this movie takes place in different time periods,” Fray explained. “For Raven and me, these are not flashbacks or flash-forwards. Every single moment, every single frame in this movie, is about Mack dealing with the present-tense stakes of her life at that given moment.”
The A24 release has received multiple nominations on the festival circuit and won several, including most recently for Best Cinematography at the Black Reel Awards.
He described the visual style of All Dirt Roads Taste of Salt as less about setting out to make poetic imagery and more about “trying to create a process where poetry could find us.”
“I don’t necessarily want the image to be unlocked in its meaning. I want it to be a metaphor,” he told Cioffi. “So that the viewer is actually grafting their histories, their loves and fears onto the image and the image is welcoming them to engage with it. It is an invitation for the viewer to put their histories on it, and in that way, hopefully make the images more robust and also make the image one where a bunch of different people can all have different interpretations.
“All I really want is for the audience to be active while they’re watching the movie, not passive.”
Studio Tours Amplified: Inside Coffeezilla’s “$10 Million” Virtual Production Setup
TL;DR
Stephen Findeisen, aka Coffeezilla, is a YouTube content creator with more than three million subscribers known for debunking online scams.
Coffeezilla’s content is set against a virtual backdrop of a cyberpunk, film noir-inspired city, created within a sophisticated virtual production environment humorously dubbed “the $10 million studio.”
Starting with no background in production or film, Coffeezilla was inspired by other YouTubers and a TED Talk by David Korins, leading Findeisen to explore virtual production to create original content.
Coffeezilla combines practical lighting solutions, like Aputure’s gobo cutouts, with post-production visual effects to enhance the realism of his virtual sets.
Coffeezilla’s approach offers insights into the future of virtual production for indie creators, highlighting the potential for innovation within resource constraints.
Stephen Findeisen, known online as Coffeezilla, is a figure of intrigue and respect on YouTube. With a subscriber count north of three million, he’s renowned for debunking online scams, primarily within the crypto space. Yet, it’s not just his investigative prowess that sets him apart — it’s the virtual world he’s crafted for his content. This cyberpunk, film noir-inspired city, complete with a detective’s office, is where Coffeezilla’s stories come to life.
Findeisen generally records his videos in front of a green screen; his backgrounds feature elaborate computer graphics, and he inserts animated graphics to illustrate his content, including a recurring character of a robot bartender. This elaborate world is crafted inside what he has jokingly dubbed “the $10 million studio,” a high-tech virtual production environment run by a lean three-person team.
Findeisen recently sat down with VP Land’s Joey Daoud to recount his journey from a basic bedroom setup to a sophisticated production powerhouse, revealing tips and tricks and breaking down the virtual production studio he built for the Coffeezilla Cinematic Universe.
Go Big or Go Home
Embarking on YouTube with no background in production or film, Findeisen was driven by a desire to create something genuinely original that also informed his audience. Watching other YouTube channels like Linus Tech Tips and a TED Talk on set design by David Korins, who has designed sets for artists such as Kanye West, he was struck by the idea that one’s environment can shape their identity and creativity.
A breakthrough moment arrived when Findeisen realized he could create an infinite number of virtual production sets in his own bedroom. He began scouring the internet for information on virtual production, including behind-the-scenes videos for The Mandalorian. But while he was able to find plenty of details for big-budget productions employing virtual production techniques, there was very little guidance for indie productions.
“I remember thinking, really early on, I want to be one of the people who really pushes this landscape of indie virtual production forward in a practical way,” he tells Daoud. “I want to show that not only can it compete, it’s actually, I think, in a lot of cases more competitive, and people just sort of haven’t figured it out yet.”
Early experiments riffed off of tropes popularized by the internet scammers Findeisen featured to show off their fake wealth, such as placing a virtual Lamborghini inside his studio. “I just basically stole what they were doing and made it like a satire, like pastiche.” The idea of the $10 million studio started off as a joke, he says, until it wasn’t a joke anymore. “Because eventually, if you push your production enough, it is almost plausible that it’s a $10 million studio.”
Lights, Camera, Action
In the expansive realm of YouTube content creation, where the scale of production can range from solo vloggers to full-fledged studio teams, Coffeezilla has carved out a niche that defies the norm. At the heart of Findeisen’s operation is a lean three-person team — Findeisen himself, a 3D artist/animator, and a video editor.
Production time for a full show can range from a month to two months, depending on the complexity of the project and the amount of work done at the desk versus in-depth investigations. The production process is meticulous. It begins with research and script compilation, followed by storyboarding and shooting. “We’ll research stuff, then I’ll start to compile it in a script,” he describes. Post-production involves editing, sound design, and final touches. “We start to shoot it, then we send that to my editor… then we’ll send it for sound design and final editing.”
For capture, Findeisen opted for the Sony FX3 and FX9 full-frame cameras, augmented with the ARRI Alexa. Practical lighting solutions, such as Aputure’s gobo cutouts, are employed to achieve specific visual effects, like Coffeezilla’s signature noir window blind lighting.
Open-source 3D software Blender serves as the backbone of Coffeezilla’s virtual set design. “I’ve just kind of always had a bad experience with Unreal, it never really runs fully real time on my computer,” he explains. “So I’m like, if we’re going to just have to do it in post, let’s just do it in Blender, because I think the renders out of Blender are slightly better.”
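To make that workflow concrete, here is a minimal, hypothetical Blender Python (bpy) sketch of rendering a still background plate of a virtual set for later compositing. The renderer choice, sample count, resolution and output path are illustrative assumptions, not details of Coffeezilla’s actual setup.

```python
# Hypothetical sketch (not Coffeezilla's pipeline): render a still plate of a
# virtual set from Blender so it can be composited behind green-screen footage.
import bpy

scene = bpy.context.scene

# Assume Cycles for the final-quality render.
scene.render.engine = 'CYCLES'
scene.cycles.samples = 256

# Assume a UHD delivery resolution to match the live-action plate.
scene.render.resolution_x = 3840
scene.render.resolution_y = 2160

# Write an EXR so dynamic range is preserved for grading and keying.
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.filepath = "/tmp/virtual_set_plate.exr"

# Render the current frame of the open .blend scene to disk.
bpy.ops.render.render(write_still=True)
```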
ICVFX vs Post-Production
Visual effects, such as the steam coming from Coffeezilla’s coffee mug, are also added in post. “I thought of so many ways to do that practically,” Findeisen laughs, recounting his process of trial and error with everything from fog machines to a boiling pot of water placed underneath the desk. “If you like problem solving, virtual production is for you. Because it’s constant problem solving.”
All of Coffeezilla’s shots where he’s sitting at his desk speaking to the camera are final pixel, using Blender video renders of the room composited in real time in Aximmetry, a node-based compositing tool for live workflows that allows the integration of live-action footage with virtual environments. “You basically pull in a camera input, but then you can apply a node layer,” he says, describing how camera correction, blur, color keys and more can be added to live footage in real time.
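Aximmetry itself is configured through its node graph rather than code, but the underlying operation it performs here (keying out the green screen and laying the presenter over a rendered set) can be sketched in a few lines. Below is a minimal, hypothetical Python/OpenCV illustration of that concept; the file names and key thresholds are assumptions, and this is not Coffeezilla’s or Aximmetry’s actual implementation.

```python
# Hypothetical sketch of green-screen keying and compositing, illustrating the
# concept only; file names and thresholds are assumptions.
import cv2
import numpy as np

foreground = cv2.imread("presenter_greenscreen.png")  # live-action frame (assumed to exist)
background = cv2.imread("virtual_set_plate.png")      # rendered set (assumed to exist)
background = cv2.resize(background, (foreground.shape[1], foreground.shape[0]))

# Build a matte by selecting green pixels in HSV space (thresholds are illustrative).
hsv = cv2.cvtColor(foreground, cv2.COLOR_BGR2HSV)
green = cv2.inRange(hsv, np.array([40, 60, 60]), np.array([85, 255, 255]))

# Soften the matte edge slightly to reduce fringing, then invert it to get alpha.
green = cv2.GaussianBlur(green, (5, 5), 0)
alpha = ((255 - green).astype(np.float32) / 255.0)[..., None]

# Composite: presenter over the virtual set, weighted by the alpha matte.
composite = (foreground * alpha + background * (1.0 - alpha)).astype(np.uint8)
cv2.imwrite("composite_frame.png", composite)
```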
The more advanced VFX shots, such as conversations with his robot bartender or whenever the camera is moving, are composited later in post-production. Findeisen initially tried match moving, which tracks camera movement through a shot so it can be reproduced in 3D space, but found it to be tedious and repetitive.
“One thing with me is I’m always trying to cheat work,” he says with a laugh, describing how he’ll work all night just to avoid ever having to do the same thing twice. “Like, I’m always trying to make things faster. But then it ends up making me do more work.”
In the end, camera tracking, which captures the camera’s position and movement in physical space as it happens, provided the answer. Findeisen tested a number of tools, ultimately deciding on Mo-Sys StarTracker, which he says delivers the closest thing to final pixel that he’s been able to find.
“It’s more about like figuring out how to solve for camera lens distortion on different lenses and like, you know, focus breathing, the things that you take for granted, or you don’t even realize are problems when you’re just trying to solve that tracking quality part.”
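Lens distortion is one of those “taken for granted” problems: before tracking data and a CG render will line up, the live-action frame usually has to be undistorted (or the render distorted to match). As a rough, hypothetical illustration of that first half, here is a minimal OpenCV sketch; the camera matrix and distortion coefficients are placeholder values that would normally come from calibrating the specific lens, and it says nothing about how Mo-Sys StarTracker handles this internally.

```python
import cv2
import numpy as np

frame = cv2.imread("tracked_frame.png")  # hypothetical input frame from the cine camera
h, w = frame.shape[:2]

# Placeholder intrinsics; real values come from a per-lens calibration pass.
camera_matrix = np.array([[1200.0, 0.0, w / 2],
                          [0.0, 1200.0, h / 2],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Compute a corrected camera matrix and remove the lens distortion from the frame.
new_matrix, _ = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, (w, h), alpha=0)
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs, None, new_matrix)
cv2.imwrite("undistorted_frame.png", undistorted)
```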
Findeisen also quickly learned the importance of practical elements in virtual sets. “Anything you touch on set should be practical. It’s so exhausting trying to manually track whatever stuff you touch, because you’re going to create all these shadows. It’s going to be a nightmare,” he says.
“Just do yourself a favor. Floors, try to make them practical. Things you touch, like chairs, or, like, if you’re touching a table, you know, please just make a practical cup, make it practical, these things are supposed to help you. And I think if you get too overly zealous about ‘I’m 100% virtual production,’ you’re going to be like me and trying to key out a green table where your arm’s touching it,” he cautions.
The Future of Indie Virtual Production
Findeisen also warns that indie virtual production for moving cameras may not be accessible to the layman YouTuber. “I’ve developed my pipeline as my resources have expanded,” he explains. “But I know when I started on YouTube, I could have never afforded the tools I needed for moving a camera. There’s just so much you need to get a working workflow for moving camera virtual production, that I think it’s worth being really honest about that,” he says.
It’s not just about the money, however; it’s also about time. “You can do a lot with a basic green screen, some basic lights and a camera. But as far as, like, trying to get into the moving camera shots, unless you’re willing to spend some time I think match moving is maybe the closest thing to a cheap workflow for that. But even then, it’s hard.”
Findeisen says that he’s come to appreciate green screen as a tool for simplifying workflows. “James Cameron, I think, is famous for saying, ‘on green screen, things that are hard in real life are easy. And things that are easy in real life are hard.’
“I think that’s really true. If you want some epic sweep of the Himalayan Mountains, you could build it on Fiverr with a CG artist for five bucks in, like, five minutes. Or you could hire a helicopter to go out there and shoot it for real on an ARRI Alexa. So they’re very two different things in scope. But if you want to have a person open a door and walk through it, fully in CG, it’s actually pretty hard. And if you want to do that, practically, it’s so easy,” he explains.
“Nobody comes to my videos thinking, ‘Oh, I’m gonna see virtual production today.’ It’s just this invisible tool that just helps things look a little bit better, helps the narrative flow a bit better. And I’m happy to have that in the background.”
Was It All a Dream? Developing the Visuals for “All of Us Strangers”
TL;DR
Cinematographer Jamie Ramsay calls his collaboration with gaffer Warren Ewing and Company 3 colorist Joseph Bicknell “the holy trinity” of his process for developing the look of “All of Us Strangers.”
Based on a novel by Taichi Yamada, the Golden Globe and BAFTA-nominated film is directed by Andrew Haigh and features beautifully appointed production design by Sarah Finlay.
Ramsay and his team created a single overarching look for the two worlds depicted in the film, separated only subtly by the differences between fabrics and dyes prevalent in each era.
Within that framework, Ramsay’s photography was motivated by the emotional content of each scene, not by opposing looks designed to delineate the different environments.
Although the movie was shot on film, it was still vitally important to develop a LUT to guide the dailies grade so the entire production team could see how materials, skin tones and lighting would ultimately look.
The highly praised Golden Globe and multiple BAFTA-nominated feature All of Us Strangers takes its lead character through some very odd situations that could have been presented in the form of a ghost story but are arguably far more impactful as the intimate drama it is. In the film, we meet Adam (Andrew Scott), a writer who has the opportunity to better understand his fears and the reasons for his loneliness when he visits his long-deceased parents (Claire Foy and Jamie Bell) and has a chance encounter with a handsome but mysterious stranger (Paul Mescal).
In addition to Andrew Haigh’s direction and script (based on a novel by Taichi Yamada), the fine acting and Sarah Finlay’s beautifully appointed production design, the film’s unusual approach succeeds in large part because of the work of cinematographer Jamie Ramsay — work that began long before shooting commenced.
A key to the film’s overall visual tone concerns the look of Adam’s contemporary London apartment, which suggests a somewhat luxurious space with a lovely view of the city but one that feels rather sterile and lonely, versus his parents’ suburban home, which evokes the era of the 1980s, when he’d last seen them, and feels a bit more inviting.
Early on, the question arose of whether the cinematography would offer a clear delineation between these two worlds, with the past looking warm or hazy perhaps and his modern world colder? No, Ramsay says. The overarching look would be similar between the two worlds, separated only subtly by the differences between fabrics and dyes prevalent in each era. Within that framework, Ramsay’s photography would be motivated by the emotional content of each scene, not by opposing looks designed to delineate the different environments.
“Once I’ve had these types of conversations with the director and production designer,” he says, he starts to build a look book of “color treatments, images, textures, tones and thoughts that encapsulate moments and anchor points in the story and how colors should shift and by how much.”
Then, Ramsay starts the process of developing an approach for the cinematography itself, and this is when discussions expand to include gaffer Warren Ewing, who will oversee the type, intensity and placement of lighting units based on Ramsay’s ideas, and colorist Joseph Bicknell of Company 3, who works with the cinematographer to develop a show LUT that reflects the story’s concepts and gives everyone involved in shooting and editing the film a visual representation of how color and contrast will be rendered in the final film.
This threesome comprises what Ramsay refers to as “the holy trinity” of his process. “Color, light and texture and the mood and emotional and intellectual context created by color is so important to a film,” he explains, “and the relationship between my colorist, my gaffer and myself is key to how we bring all that to life.”
Though Ramsay shot All of Us Strangers on film, it was still vitally important to develop a LUT that would be used to guide the dailies grade so that everyone involved, from the director to production and costume designers and, of course, Ewing, could all see how materials, skin tones and lighting would ultimately look.
“We wouldn’t see [the results of the LUT] on set as we would if we were shooting digitally,” Ramsay explains, “but it was important to do this work in advance so that everyone could get a good idea of what the LUT would do so we were all on the same page,” he says. “After the first week of shooting, one of the most important things we did was having a big-screen projection of some of the choice dailies.”
Creating the LUT involved shooting tests on various Eastman Kodak emulsions, which Ewing lit and Bicknell graded. “We could [test] production design and wardrobe design and color palettes and various materials and tonal boards and really see where the colors go,” he says. He and Bicknell worked with the film scans at Company 3 to design the show LUT. “And from that, we can see where we will need to push and pull colors. Do we need to wash a color down a little bit or keep it where it is?”
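In data terms, a show LUT like the one Ramsay and Bicknell built is simply a lookup table, typically delivered as a .cube file, that maps every input color to the graded output color so a dailies system can apply the intended look automatically. The sketch below loads and applies one in Python; it is a simplified, hypothetical illustration (nearest-neighbour lookup, where production tools use trilinear or tetrahedral interpolation), not a description of Company 3’s pipeline.

```python
import numpy as np

def load_cube_lut(path):
    """Parse a 3D .cube LUT file; return (size, table) with table indexed as [r, g, b] -> RGB."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("#", "TITLE", "DOMAIN_")):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[-1])
            else:
                rows.append([float(v) for v in line.split()])
    table = np.array(rows).reshape(size, size, size, 3)  # .cube order: red varies fastest
    return size, table.transpose(2, 1, 0, 3)             # reorder so we can index [r, g, b]

def apply_lut(image, size, table):
    """Apply the LUT to a float RGB image in [0, 1] using nearest-neighbour lookup."""
    idx = np.clip(np.round(image * (size - 1)).astype(int), 0, size - 1)
    return table[idx[..., 0], idx[..., 1], idx[..., 2]]

# Hypothetical usage: grade one frame of scanned film with the show LUT.
size, table = load_cube_lut("show_lut.cube")
frame = np.random.rand(1080, 1920, 3)      # stand-in for a scanned, normalized frame
graded = apply_lut(frame, size, table)
```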
Communication among the DP, gaffer and colorist from this early stage, Ramsay says, helps to ensure control over the imagery as Ewing and Bicknell compare notes on how “the quality and intensity of the light on set interplays with the handling in the grade of the contrast and the placement of blacks and highlights within the frame.”
So much of the film is ultimately about the gravity of the loss that Adam suffered when his parents were killed in an accident and much of what Ramsay endeavored to create was based on the idea of the character’s memories. Portions of the film were designed to feel “almost like a time capsule in the sense that Adam’s parents were locked in this point in time in the late ‘80s.”
This didn’t mean that the show LUT should bring an overarching ‘80s feel to everything, though. “We wanted to be able to evolve the color in such a way that there could be a growth to it,” the DP says. “So, for instance, in the current day, costume and production design choices would include intense primary reds while the ‘80s reds would be more like a burnt orange. A primary green would be more of a pistachio. And so on. So, the relationship of color and time was dealt with on the set.”
When Ramsay and Bicknell worked on the final grade, a great deal of the look of the film had already been worked out in the well-thought-out interplay between the lighting, design and the show LUT. Says Bicknell, “I love working with Jamie as we have a lot of the creative conversations and world building in preproduction, and that allows us to be inspired by the changing emotion and tone of the film in the final DI, helping to advance the story with color.”
Quite a bit of suspension of disbelief is demanded from the audience, the DP says, “but we all believed that if we gave the audience the respect — if we gave them a tableau that felt honest and truthful and real — that would serve us, because it would just put the responsibility in their hands just to just stay with the flow.”
ENCO’s One-Stop Captioning Workflow Solution
Along with continued improvements to its market-leading enCaption solution, ENCO has added its first closed caption encoder, born through a strategic acquisition.
ENCO continues to evolve into a one-stop captioning solutions provider for broadcasters and media companies. Along with continued enhancements to its AI-based enCaption5 automated live captioning and transcription solution, ENCO’s recent acquisition of DoCaption adds a closed caption encoder to ENCO’s powerful captioning workflow.
The new DoCaption Closed Caption Encoder uses DoCaption’s proven existing architecture, fanless design and redundant power supplies to provide broadcasters with a cost-effective closed captioning encoder with modular options to support regional captioning and subtitling standards. The 1RU captioning encoder inserts captions into live broadcasts from any live captioning source via existing captioning standards through an IP or serial connection. It can also integrate directly with ENCO’s own enCaption automated captioning system for a complete turnkey solution.
“The introduction of our new DoCaption hardware captioning encoder provides an immediate turnkey solution for customers that need to build or replace closed captioning infrastructure,” said Ken Frommert, President of ENCO. “In the bigger picture, this product represents the endless possibilities for developing new products, features and applications in collaboration with DoCaption’s engineering team.”
The DoCaption engineering team specializes in developing embedded, integrated hardware and software solutions. With all hardware developed in-house, ENCO and DoCaption engineers can freely develop complete solutions with future media workflow opportunities in mind. ENCO’s new DoCaption closed caption encoder is a prime example of how developers have built an integrated captioning workflow solution to encode and present ancillary data for North American captioning standards (CEA-608 and CTA-708, formerly EIA-708/CEA-708), European subtitling standards (EBU Teletext OP-42/OP-47, ST-2031) and South American captioning standards (ARIB B37). The same modular architecture can support additional ancillary data applications, including SCTE 104 metadata insertion and frame-accurate scoreboard data encoding/decoding.
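To give a flavor of the low-level work such an encoder does: CEA-608 caption data travels as pairs of 7-bit characters, each carrying an odd-parity bit in the top bit. The snippet below shows just that parity step for basic ASCII text; it is a toy sketch of one detail of the standard, not the DoCaption encoder’s implementation, which also has to handle control codes, extended character sets and the actual line-21/VANC packaging.

```python
def with_odd_parity(code: int) -> int:
    """Return the 7-bit CEA-608 code with bit 7 set so the full byte has odd parity."""
    code &= 0x7F
    ones = bin(code).count("1")
    return code | (0x80 if ones % 2 == 0 else 0x00)

def encode_608_pair(c1: str, c2: str) -> tuple[int, int]:
    """Encode two basic-charset characters as a CEA-608 byte pair.

    Most printable ASCII maps straight across; the handful of codepoints that
    differ in the standard are ignored in this simplified sketch.
    """
    return with_odd_parity(ord(c1)), with_odd_parity(ord(c2))

print([hex(b) for b in encode_608_pair("H", "I")])  # ['0xc8', '0x49']
```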
ENCO has also revamped its HTML5 browser front-end user interface for enCaption, offering a responsive design that streamlines functionality for customers. The design includes enhancements such as Calendar View for scheduling captioning events in addition to the existing List View option.
ENCO has also updated the enCaption software platform to enable deployment on Linux operating systems, giving those looking to standardize on a single operating system another option. Also deployable on Windows, enCaption5 now offers flexible live or file-based captioning and transcription that can run on-premises, in the cloud, or in a hybrid on-prem/cloud workflow. enCaption’s open API supports seamless integration with media asset management systems and third-party developers, even enabling white-label third-party services.
As of late Q4 2023, enCaption can also create transcripts and add closed or open captions to both live and pre-recorded content in 48 languages. Other recently added features include accuracy improvements of up to 20 percent and improvements to speaker detection for seamless captioning of program material with multiple speakers. Improved data formatting for phone numbers, measurements, websites, and email addresses also drives better caption readability.
Another recent interesting development is the addition of enTranslate software to enCaption. Now available fully on-premises or in the cloud, enTranslate combines advanced speech-to-text conversion, machine translation and grammatical structure analysis to deliver captions in multiple languages simultaneously. enTranslate includes unit conversion, such as metric to imperial measurements, with the conversions automatically performed based on source and target languages.
enTranslate can be used for all broadcast workflows, including TV, radio, OTT feeds, or offline. Users can embed translated captions in short and long-form VOD content, including podcasts. For live radio applications, enCaption creates speech data and sends it to ENCO’s enTranslate engine for translation. For more information or to schedule a demo, please visit https://www.enco.com/products/encaption and https://www.enco.com/products/docaption-encoder.
Reading Between the Lines: Gen Z Really Loves Closed Captions
TL;DR
More than half of Gen Z and millennial media consumers prefer subtitles, according to new survey results from YPulse and Preply.
While subtitles haven’t always been seen as a first choice, they’ve grown in ubiquity — especially with the rise of online videos that include automatic captioning.
Captions help watchers keep up with murmuring dialogue, distinguish thick accents and get a head start on a scene.
Closed captions aren’t just for the hearing impaired — the rise in their popularity is being driven by younger viewers, who are in fact making the use of subtitles while watching television the norm.
In a new “TV and Entertainment report,” YPulse found that more than half of 13-39-year-olds prefer to use subtitles.
And it’s not just because they need them; the younger generation makes use of reading text while watching movies/TV to keep up with murmuring dialogue, to distinguish less familiar accents, and some say just to get a head start on a scene and go back to looking at their phone.
Per the report, 59% of Gen Z survey respondents and 52% of millennials said they use subtitles. 61% of Gen Z males say they prefer to use them.
These are no outliers. A 2023 report by Preply found Gen Z overwhelmingly the generation most likely to be turning on subtitles (70% of Gen Z respondents said so compared to 53% of Millennials, and just 35% of Baby Boomers).
As to why Gen Z likes to turn on text while watching their shows, part of it, according to Wilson Chapman at IndieWire, is that people in that generation grew up watching videos on social media, where subtitles are the algorithmically encouraged default.
Sara Fischer at Axios writes that TikTok helped normalize captions for young media consumers, who are now turning regularly to subtitles as part of their streaming habits.
“TikTok has an auto caption feature that a lot of content creators will use,” Axios reporter April Rubin told WGBH Morning Edition co-host Jeremy Siegel. “And so people are just a little bit more used to reading as they watch. Another factor that may play into this is that it has been a little tougher to maintain quality sound in the streaming era. So they could be watching subtitles just because they’re missing some of the dialogue with background noise or changing volumes.”
Younger kids actively need subtitles to enjoy the content they are watching, according to a Kids Industries survey of US and UK parents with kids 5-15 years old. In this case, subtitles add an increased dimension of understanding to viewing. Watching content with closed captions can reportedly improve literacy, vocabulary, and the speed of reading, the report said.
“For kids’ media brands, the widespread use of closed captions should be a sign to improve accuracy and make sure subtitles are available for all programs,” suggests YPulse.
But closed captions are being used more by all of us. A 2022 report by Netflix revealed that 40% of its global users have closed captions on all the time, while 80% switch them on at least once a month.
In its survey Preply determined that half of Americans used closed captions with the top reason (cited by 72% of respondents) being that subtitles make dialogue easier to understand.
As Chapman lays out in IndieWire, the causes behind muddled dialogue are many and vary from person to person. For some, the problem is the design of modern televisions, the majority of which place internal speakers at the bottom of the set instead of facing towards the audience, causing significantly worse audio quality. Other issues are caused by sound designs optimized for theatrical experiences, which can result in compressed audio when translated to the home.
“A lot of people struggle to hear dialogue now, so turning on closed captioning to decipher what people are saying has become a no brainer move,” he says.
An article in British broadsheet The Guardian also focuses on hard-to-hear dialogue, which is a known issue in the industry, according to sound mixer Guntis Sics (Thor: Ragnarok), who is quoted in the piece.
Where once actors had to project loudly towards a fixed microphone on set, more portable mics have allowed a shift towards a more intimate and naturalistic style of performance, where actors can speak more softly — or, some might say, mumble.
“Anthony Hopkins on Thor spoke like a normal human being, whereas on a lot of other films, there’s a new style with young actors — it’s like they just talk to themselves. That might work in a cinema, but not necessarily when it gets into people’s lounge rooms,” Sics says.
The Guardian’s Katie Cunningham also suggests sound mixes have become more complicated — fine for the dozens of speakers in a Dolby Atmos theater but indistinct when played back through a TV’s tiny and tinny speakers.
“When sound is mixed with the best possible audio experience in mind much of that detail can be lost when it’s folded down to laptop speakers, or even your television. It’s often the dialogue that suffers most.”
If you haven’t invested in an expensive speaker set up at home then reliance on the TV’s speaker output alone “could leave you with a subpar experience.”
Of course, the volume of foreign language shows and the phenomenal popularity of some of them — from Squid Game to Money Heist — demands subtitles, but even English-language shows seem too hard for many Americans to understand.
British comedies and dramas that aren’t the usual period dramas like The Crown are often acted with authentic local accents. Peaky Blinders (Birmingham), Derry Girls (Northern Irish) and even contestants on reality TV shows like Love Island are called out, as is Irish Oscar-nominated drama The Banshees of Inisherin.
“If people get used to using subtitles where it’s basically required, it becomes a matter of habit to keep them in use even when watching American productions,” says Chapman.
In her article, she puts the value of the US captioning services market at nearly $170 million in 2022. Studios, however, often outsource the work to companies like Rev, which in turn has 75,000 international freelancers on its books for transcription work.
Some studios issue very specific subtitle requirements. Netflix’s style guide includes rules like a limit of 42 characters per line, a set reading speed of up to 20 characters-per-second for adult shows (up to 17 for children’s programs) and an emphasis that “dialogue must never be censored.”
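Rules like those are easy to turn into an automated check. The snippet below is a hypothetical validator for a single subtitle event against the limits quoted above (42 characters per line, 20 characters per second for adult titles, 17 for children’s); it illustrates the rules as described here and is not Netflix’s own tooling, whose character counting has further subtleties.

```python
def check_subtitle(lines: list[str], duration_s: float, childrens: bool = False,
                   max_chars_per_line: int = 42) -> list[str]:
    """Return a list of rule violations for one subtitle event."""
    problems = []
    for i, line in enumerate(lines, start=1):
        if len(line) > max_chars_per_line:
            problems.append(f"line {i} has {len(line)} characters (limit {max_chars_per_line})")
    max_cps = 17 if childrens else 20
    cps = sum(len(line) for line in lines) / duration_s
    if cps > max_cps:
        problems.append(f"reading speed {cps:.1f} cps exceeds the {max_cps} cps limit")
    return problems

# Example: a two-line subtitle that is on screen for only 1.5 seconds reads too fast.
print(check_subtitle(["I can't believe you said that.", "Neither can I."], duration_s=1.5))
```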
To prepare for live events like awards shows, captioners are given everything from the teleprompter script in advance — except for the names of the winners. When people ad-lib or give their acceptance speeches, the captioners are working from scratch.
“The person gets up and thanks someone with a very complicated name. We take a guess at it, but we’re going to spell it wrong. That’s bound to happen,” says Heather York, VP of marketing for captioning company Vitac.
Streamers often ask for subtitles in up to nine languages before their shows drop, creating a new challenge for service providers.
“We’ve got to pivot with our workflows, with our resources,” says Deluxe senior VP Magda Jagucka. “That process to bring non-English original content to global audiences requires multiple translation and adaptation steps.”
AI is already being used to give a first pass at transcription, with human editors then going through to make corrections — but there are current limitations.
“There’s a lot of nuance, and the audio-visual translation isn’t really just based on text,” says Jagucka. “When you’re thinking about AI, it goes through that textual base, but translators get our cues from the sound, from the visual, from the picture, from the tonality of the dialogue and the actors acting, as well.”
It is another instance where AI is a tool to assist rather than replace humans, at least at this stage.
Pat Krouse, VP of operations at Rev, tells THR, “AI is really helpful where it speeds up … moving from a pure typist to an editor and a proofreader, and eventually a summarizer. It makes humans focus on higher value things, as opposed to just pure typing work.”
Why Captions Are Now (Almost) Essential in Video Content Consumption
TL;DR
Captions and subtitles are an essential tool for individuals with hearing loss and, for a lot of people, they’re a constant on TV and video screens. A recent survey showed that 50% of Americans watch content with captions and subtitles most of the time.
There are many reasons for this increase in caption viewing, which this article explains: advances in technology, societal expectations and changes in the way programs are broadcast.
If entertainment trends of dim lighting, loud background music, and muddled audio continue, it’s likely that the use of subtitles will only increase in popularity.
No longer considered an optional feature, captions and subtitles have become an essential part of content creation. Recent data has shown that younger generations overwhelmingly prefer to watch content with subtitles on. So popular have captions and subtitles become that a third of Americans think subtitles should be the default on streaming services and cable TVs, while 26% think they should be the default at movie theaters.
A survey of 1,260 Americans conducted by online language learning platform Preply found half of TV viewers watching content with subtitles most of the time, with younger (Gen Z) demos much more likely to be frequent users (70%). Millennials are also more likely to use the feature than the average respondent, at 53%. Older respondents, including Gen X and Baby Boomers, were actually the groups least likely to be frequent subtitles users.
One reason for the younger skew is that this is the generation that grew up with streaming and social media and has become accustomed to watching TikTok, Instagram, or YouTube videos where subtitles and captions can be automatically generated.
Social media also influences how Gen Z consumes movies, TV shows, and other video. According to the survey, 74% of Gen Zs watch content in public on their mobile devices, meaning that captions and subtitles are a prerequisite if you want to follow your favorite show in a noisy environment.
Another reason cited for turning on the captions is that it’s become harder to hear the dialogue in shows and movies than it used to be. One reason for this, explains language services company Vitac, is that in movie productions, professional sound mixers calibrate audio for traditional theaters with large speaker systems to deliver a wide range of sound. But when that same content is streamed on a TV, smartphone, or tablet, the audio gets compressed to carry the sounds through much smaller speakers. Adding to this, the thinner design of today’s flat-screen TVs forces manufacturers to locate speakers in less-than-ideal positions (the sides, the back) that direct sound away from the viewer and can muffle character dialogue and on-screen actions.
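A simplified way to see why dialogue suffers in that squeeze: when a 5.1 theatrical mix is folded down to a TV’s two speakers, the centre channel, which usually carries the dialogue, is typically mixed in about 3 dB lower and then has to share those small speakers with everything else. The sketch below uses common ITU-style downmix coefficients purely as an illustration; real devices vary in the coefficients and loudness processing they apply.

```python
import numpy as np

def fold_down_to_stereo(fl, fr, c, lfe, ls, rs, centre_gain=0.707, surround_gain=0.707):
    """Fold six 5.1 channel arrays into a stereo pair (Lo, Ro).

    A centre gain of 0.707 (about -3 dB) is the typical ITU-style coefficient:
    the dialogue-heavy centre channel is attenuated and then competes with the
    fronts and the folded-in surrounds. The LFE channel is simply dropped here,
    as it often is in a basic fold-down.
    """
    lo = fl + centre_gain * c + surround_gain * ls
    ro = fr + centre_gain * c + surround_gain * rs
    return lo, ro

# Toy example: one second of audio with dialogue only on the centre channel.
sr = 48000
silence = np.zeros(sr)
dialogue = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
lo, ro = fold_down_to_stereo(silence, silence, dialogue, silence, silence, silence)
print(np.max(np.abs(lo)))  # roughly 0.35: the 0.5-amplitude dialogue arrives about 3 dB down
```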
The issue could also be being exacerbated by the production itself. Sasha Urban at Variety notes a recent phenomenon of film and TV releases such as The Batman and Euphoria using visuals so dark that viewers can barely tell what’s happening. Whether this is due to changing director taste or the limits of home entertainment systems, Preply confirmed that a huge 78% of Americans in its poll have difficulty hearing dialogue due to loud background music in films and TV shows, leading 55% of respondents to agree that it is harder to hear on-screen dialogue than it used to be.
When it comes to productions being overall not as well lit, 44% of Americans agree that recent productions are using darker visuals than past ones. Not only that, but 35% agree that actors and TV personalities are talking faster than they used to.
Appetite for Global Programs
The rise in popularity (and easy availability) of foreign language content on streaming platforms is another reason for the increase in caption and subtitle usage.
A 2021 report showed that non-US shows accounted for nearly 30% of the demand for TV in the US, with top content coming from the UK, Japan, Korea, and India. The trend has continued with shows like Squid Game and Money Heist gaining popularity in recent years.
And even when a show is in English, that doesn’t always mean that it’s easy to understand or follow along with what people are saying. Shows from the UK contain regional accents, slang, and expressions that are unfamiliar to some viewers. Preply’s list of “The Hardest to Understand TV Shows” includes a number of UK-based shows, with Peaky Blinders, Derry Girls, Downton Abbey, and Bridgerton among its ranks.
Topping its list of hard-to-understand actors is Tom Hardy (Venom), followed by Sofia Vergara (America’s Got Talent), Arnold Schwarzenegger and Sean Connery, with Johnny Depp coming in fifth.
Pros and Cons
For viewers, using subtitles has clear pros and cons. Being able to follow along with the dialogue visually helps them understand the plot (74%), hold their attention on the screen (68%), and not rewind as frequently after missing things said (55%), which overall enhances the viewing experience.
However, subtitles also come with some cons. Splitting their attention from the visuals of the content makes 40% of viewers worried that they’re missing things. In fact, more than one in five Americans find subtitles more distracting than helpful.
And which streamer gets ranked best for its subtitling feature? Per Preply, Netflix is in the number one spot, with Amazon Prime coming in second and Hulu taking bronze.
The Vision Pro Is One Step Removed From Reality — Is That a Bad Thing?
TL;DR
Mixed reality — or spatial computing — is still an experiment, but Apple has spent billions of dollars developing the Vision Pro using passthrough technology that allows the wearer to see the real world while wearing the goggles.
The Apple Vision Pro may be a marvel of modern industrial design, but what will the killer app for its mixed reality be, if there is one at all?
The psychological effects of experiencing virtual and mixed reality for long periods have not been properly examined, but the evidence to date doesn’t bode well, in part because of how socially isolating it can be.
As Apple releases its new mixed reality system, Vision Pro, the media tech industry is pondering what it is for. No one knows the answer, probably not even Apple, but given the company’s track record in defining new categories in consumer electronics, interest in its approach and capabilities is high.
Apple’s entrance into VR has symbolic weight, because the company has had so much influence on computers and phones, Microsoft exec and VR pioneer Jaron Lanier writes in The New Yorker.
Apparently Apple CEO Tim Cook “knows” that VR (or spatial computing) is the future of computing and entertainment and apps and memories, according to Nick Bilton at Vanity Fair.
VR has long been an established industrial technology, used for designing cars and to train surgeons in new procedures, for example. It has also been used by artists to explore the nature of consciousness, relationships, bodies, and perception, writes Lanier.
In between the two extremes lies a mystery: “What role might VR play in everyday life? The question has lingered for generations, and is still open.”
Lanier considers the Vision Pro to be a virtual reality device, one that allows users to see the real world around them overlaid with 3D virtual objects. That’s because video of the user’s surroundings is streamed — almost live — and displayed on a high-resolution screen.
As Shira Ovide at the Washington Post explains, “When you strap on the Vision Pro, you can watch a movie through the screen on your face and see your living room around you. You can pull up a recipe app through Apple’s headset and position virtual cooking timers above your pots as you follow the instructions.”
She says, “But you’re not seeing the real world. You’re seeing a nearly live streaming video of your living room or kitchen with apps superimposed on there.”
Director James Cameron explained to Bilton that the imagery in the Apple Vision Pro looks so real because it is writing a 4K image into users’ eyes. “That’s the equivalent of the resolution of a 75-inch TV into each of your eyeballs — 23 million pixels,” he said, later adding that he thinks the product is “revolutionary.”
Bilton listed a number of problems with the product — none of which were insurmountable. For instance, the unit’s $3,500 price tag could be subsidized by Apple if it wanted, with “as much financial impact as Cook losing a nickel between his couch cushions.”
It’s not the weight or the size, because v2.0 will improve on those, nor the fact that Meta, Netflix, Spotify, and Google are currently withholding their apps from the device: “Content creators may come around once the consumers are there, and some, like Disney, are already embracing the device, making 150 movies available in 3D, including from mega-franchises like Star Wars and Marvel,” Bilton notes.
No, what bothers Bilton about the Vision Pro is just how good the experience is. Clearly wanting to keep getting invites to interview Apple bosses and attend behind-closed-doors previews, Bilton says that every other routine computing experience — and even the actual world around us — pales beside the hyper-real version of it viewed through Cupertino’s new goggles.
“In the same way that I can’t imagine not having a phone to communicate with people or take pictures of my children, in the same way I can’t imagine trying to work without a computer, I can see a day when we all can’t imagine living without an augmented reality.”
This is because with the Vision Pro you “actually feel like the person is in front of you and you can reach out and touch them,” he gushes. “I saw the world around me. I didn’t feel closed off or claustrophobic. I left the Apple offices… and when I opened my laptop, a relatively new computer, it felt like a relic pulled from the rubble of a Soviet-era power plant.”
Around 180,000 people were tempted to buy a Vision Pro in the opening weekend of online preorders, according to figures quoted by Vanity Fair. Morgan Stanley anticipates that sales will ramp up to two million to four million units a year over the next five years, and that it will become a new product category for the company. But others, like Apple supply chain analyst Ming-Chi Kuo, think it’s going to remain a niche product for some time.
David Lindlbauer, a professor leading the Augmented Perception Lab at Carnegie Mellon University, doubts that we’ll see people talking to their friends while wearing Vision Pro headsets at coffee shops in the near future. It’s simply strange to talk to someone whose face you can’t fully see.
“Socially, we’re not used to it,” Lindlbauer told Vox. “And I don’t think we even know if we ever want to get used to this asymmetry in a communication where I can see that I’m me, aware of the device, can see your face, can see all your mimics, all your gestures, and you only see a fraction of it.”
Lanier notes that research by a Stanford-led team has found evidence of cognitive challenges with such camera-based mixed reality. They shared their findings in a new paper that reads like a cautionary tale for anyone considering wearing the Vision Pro anywhere but the privacy of their own home.
“Your hands are never quite in the right relationship with your eyes,” he says. “Given what is going on with deepfakes out on the 2D internet, we also need to start worrying about deception and abuse, because reality can be so easily altered as it’s virtualized.”
As explained by Vox reporter Adam Clark Estes, a big problem with the passthrough video technology is that cameras — even ones as high-tech as those in the Vision Pro — don’t see the way human eyes see. The cameras introduce distortion and lack the remarkably high resolution at which our brains are capable of perceiving the world. What that means is that everything looks mostly real, but not quite.
When the headsets came off, it took time for the researchers’ brains to readjust, so they would misjudge distances for a while. Many also reported symptoms of simulator sickness — nausea, dizziness, headaches — that will sound familiar to anyone who’s spent much time using a VR headset.
Tech analyst Benedict Evans noticed something in the videos Apple released to developers last year to showcase what the Vision Pro can do: “Apple doesn’t show this being used outdoors at all, despite that apparently perfect pass-through. One Apple video clip ends with someone putting it down to go outside.”
Lanier’s concerns run deeper than user experience. He thinks virtual reality apps for the Vision Pro will come from all kinds of companies, and “could agitate and depress people even more than the little screens on smartphones.”
He is also worried about the engineering and support effort it will take to keep a system as complex as this always up to date.
More problematically, Lanier just doesn’t think users are going to want to be in virtual reality for anything more than specific experiences.
“Apple is marketing the Vision Pro as a device you might wear for everyday purposes — to write e-mails or code, to make video calls, to watch football games,” he says. “But I’ve always thought that VR sessions make the most sense either when they accomplish something specific and practical that doesn’t take very long, or when they are as weird as possible,” he says.
“Venture capitalists and company-runners talk about how people will spend most of their time in VR, the same way they spend lots of time on their phones. The motivation for imagining this future is clear; who wouldn’t want to own the next iPhone-like platform? If people live their lives with headsets on, then whoever runs the VR platforms will control a gigantic, hyper-profitable empire.”
But Lanier doesn’t think customers want that future. He says, “People can sense the looming absurdity of it, and see how it will lead them to lose their groundedness and meaning.”
To Lanier, living in VR makes no sense to who we are as human beings. “Life within a construction is life without a frontier. It is closed, calculated, and pointless. Reality, real reality, the mysterious physical stuff, is open, unknown, and beyond us; we must not lose it.”
What Consumer Technologies Could (Will) Change Media and Entertainment?
TL;DR
Learn about the five key trends Lori H. Schwartz identified at this year’s CES in Las Vegas: health intelligence, autonomous intelligence, immersive intelligence, as-a-service intelligence and creative intelligence.
Schwartz is joined by Boaz Ashkenazy of Simply Augmented, who provides perspective on the way artificial intelligence advancements underlie all of the trends that are expected to shape M&E this year.
What does 2024 have in store for us? Lori H. Schwartz, StoryTech principal and NAB Amplify content partner, has some thoughts.
Here, Schwartz teams up with Simply Augmented CEO and founder Boaz Ashkenazy to share five trends she identified at CES 2024 and their implications for M&E.
Watch Schwartz and Ashkenazy’s full conversation, below in two parts, and read on to get their take on how technology will change the way we work and play.
The first trend may seem obvious, but it’s extremely significant, according to Schwartz: “There was really a horizontal wave of AI, across all the exhibitors and all the experiences” at CES. She considers it to be a super-trend because it undergirds each of the themes.
“We’re really talking about the impact of artificial intelligence on all of these things,” Schwartz explains.
Health Intelligence
“This really has to do with the impact that digital has had on healthcare in this last year,” Schwartz says.
For his part, Ashkenazy says, “There’s two parts of health that are super interesting to me: One is with input coming into the system, and one with input being generated from the system.”
“Using natural language to” control both the inputs and create the outputs “is the big breakthrough,” Ashkenazy predicts. Currently, “there’s a lot of manual effort to bring content into the system” but in the near future, “that’s going to be taken care of by the machine.”
How will this impact M&E? These new products and solutions “will need storytelling surrounding them so that people will actually use them and know what to do with them,” Schwartz explains. For example, she says, “What we’re starting to see is the rise of content studios inside of healthcare systems.”
Autonomous Intelligence
“We’ve all, of course, been hearing so much about autonomous vehicles, about robots, about all these… machines taking over,” Schwartz says. “But the truth is that a lot of this is going to be about automating repetitive and tedious tasks.”
Helper robots aren’t exactly new, but Ashkenazy notes that there’s a trend toward on-device AI for these machines, eliminating the cloud connection. He explains this enables the robots to have faster reactivity to external stimuli, whether using computer vision (integrated cameras) or other interactive elements.
“The more that we personalize and create these solutions that ease tasks, the more there’ll be opportunities to free up for higher level spending of time,” Schwartz observes.
Immersive Intelligence
Sphere in Las Vegas is perhaps the most famous example of this as of 2024. Once you see it, Schwartz says, “You realize that the future of display has changed forever.”
Avatars also fall into the category of immersive intelligence. “Being able to actually communicate intelligently with an avatar is one of the most exciting things,” Ashkenazy says. “And I think the only way that that can happen is through natural language and AI being able to be fast enough so that you’re getting responses in real time.”
Introducing these avatars also prompts the need for real-time translation “so that anybody with any language can get into these environments and have interactions in these environments that feel real and that feel compelling,” Ashkenazy says. He adds, “That has real implications for content creators, as they think about ‘What kind of intelligence do we want to bring into these avatars?’, based on the conversations that we might be having?”
Schwartz agrees, noting “We’re really heading towards this immersive world, AI-driven but also really highly dependent on content creators.”
As-a-Service Intelligence
“Products are now going to be moving towards a product-as-a-service model, which means that instead of just buying a product, using it, and then throwing it out when you’re done, you’re actually going to be subscribing to services through that product manufacturer,” Schwartz explains.
For his part, Ashkenazy says, “It involves IoT devices that live in all the spaces and all the buildings that we occupy. And you’re going to be seeing product-as-a-service inside spaces as well, where you know, the monitoring of devices that understand where we are, what temperature we want to be in, what the sound quality is like for spaces. All of that is going to actually have AI on device and very, very small chips, making calculations and helping our environments react.”
This trend is “going to be a little bit invisible,” Ashkenazy predicts. He says, “I think AI is going to be in every product. And we’re not going to know it, but it’s going to be helping us in different ways.”
Schwartz agrees: “I think the data point we heard at CES was that there’ll be 200 billion devices that are going to be connected to the internet over the next few years. And that each of them are going to be able to make decisions.”
This also means “brands …are going to have to become publishers. They’re going to have to generate content,” she says. “They’re going to have to bring value to what their product is, besides the actual product, so that they can actually deliver on a service. And the service could be, you know, as simple as news.”
Creative Intelligence
The final trend, creative intelligence, is “changing the model for how content is created,” Schwartz says.
“Last year was really a year of Gen AI and text,” Ashkenazy says, “and I think this year and the year to follow is going to be the year of multimodal,” referring to images, video, and audio content creation.
“It’s something to celebrate because it’s going to allow us to have a lot more iterative ideas early. And that’s going to mean…better designs at the end,” he predicts.
“It will also launch newer businesses that will be able to really see Gen AI as a tool. And it requires talent; it still requires that human input,” Schwartz says.
“The bottom line for me when it comes to creators and these tools is that smaller teams and individuals can do so much more now than they could before. It really gives creators superpowers,” Ashkenazy says.
Want more? We’re excited to share this exclusive “5 Trends” document, meticulously curated by Schwartz’s team of professionals in advertising, technology, and media.
The Algorithm Abides: Have You Noticed a Certain… Sameness in Society?
TL;DR
Writer Kyle Chayka examines the impact that social media newsfeeds have had on taste. He argues that by favoring engagement and removing human curation, we have created a more homogenous culture, both online and in real life.
He says that the equation of success with engagement has also acted as a new kind of gatekeeper, since anyone can publish content online, but trends largely dictate whether anyone will have the opportunity to experience it.
Despite the current ubiquity of algorithmic feeds, Chayka has a few ideas for how we can reclaim or develop our individual sense of taste.
Did the advent of the “curated” social media feed spell the end of good taste — or any taste at all? And is there any art still challenging us to consider what we actually like, as creatives shift from following their muse to gaming the algorithm?
In his new book, “Filterworld,” journalist Kyle Chayka considers the origins and inadvertent consequences of a society that consumes content, rather than appreciates art.
Chayka tells Esquire that the era of passive consumption (perhaps coupled with an aversion to anything that smacks of elitism) “makes the cultural landscape less interesting, but it also takes away this opportunity for us to encounter art that is really shocking or surprising or ambiguous.”
Coffee Shop and Content Conformity
This online phenomenon has had IRL consequences. “[T]he coffee shop aesthetic was kind of the canary in the coal mine. It was the most visible symbol of this homogenization that was happening over the Internet,” Chayka recalls to Lizzie O’Leary, host of Slate’s “What Next: TBD.”
(If you don’t know what he’s talking about, Google “hipster coffee shop in [your city/town].” Chances are good that it features white subway tiles, mid-century modern-esque furniture, and a fiddle leaf fig tree.)
How did this happen? “Instagram and other feeds have really shaped how we see everything in the world. Like, all forms of culture has to, kind of, flow through them,” Chayka says.
“Algorithmic feeds and recommendations have kind of guided us into conforming to each other and kind of having this homogenization of culture where we all accept the average of what everyone’s doing,” Chayka explained to Fresh Air’s Tanya Mosley.
These recommendations are acting on “us in two different directions,” he says. “For us consumers, they are making us more passive just by, like, feeding us so much stuff, by constantly recommending things that we are unlikely to click away from, that we’re going to tolerate, not find too surprising or challenging. And then I think those algorithmic feeds are also pressuring the creators of culture, like visual artists or musicians or writers or designers, to kind of shape their work in ways that fits with how these feeds work and fits with how the algorithmic recommendations promote content.”
Although creators now have a way to bypass traditional human gatekeepers (the editors and critics and producers), Chayka says they’ve traded a regime dictated by personal tastes and connections for one ruled by engagement metrics.
“On the internet, anyone can put out their work, and anyone can get heard,” he says. “But that means to succeed, you also have to placate or adapt to these algorithmic ecosystems that I think don’t always let the most interesting work get heard or seen.”
Capitalizing on the Algorithm
While consumers are at the mercy of “the algorithm,” let’s not forget that these feeds are managed by corporations that tweak them to benefit their metrics (and their bottom line).
Chayka cites Netflix’s home page as a prime example of how a company leverages its algorithm to serve you content that they want you to watch.
“There’s this problem called corrupt personalization, which is the appearance of personalization without the reality of it,” Chayka explains. “Netflix is always changing the thumbnails of the shows and movies that you are watching in order to make them seem more appealing to you.”
Unfortunately for them, Chayka says, “An academic did a long-term study of this by creating a bunch of new accounts and then kind of giving them their own personalities. …what this academic found was that Netflix would change the thumbnails of the shows to conform to that category that the user watched, even if the show was not of that category. …the algorithm, in that way, is kind of manipulative and using your tastes against you.”
Chayka is worried that this is set to get even worse as gen AI proliferates. “AI is kind of promising to just spit out that average immediately… It’ll take in every song, every image, every photograph and produce whatever you command it to. But that output will just be a complete banal average of what already exists.”
Riding the wave of early ‘00s nostalgia is a longing for the oddity and sense of serendipity that were a hallmark of the pre-newsfeed Internet. Is there any way to regain it?
“People are starting to get bored of this whole situation,” he tells Mosley. That gives him hope. He predicts: “As users start to realize that they’re not getting the best experience, I think people will start to seek out other modes of consumption and just build better ecosystems for themselves.”
(We built it, so we can break it, I suppose?)
Chayka points to the EU’s regulatory efforts (including the Digital Services Act) as a step in the right direction for breaking the absolute power of the algorithmic newsfeed. Short of that, in the US, we could stand to break up some of the Big Tech monopolies.
“I think increased competition — like if we can break down the monopolies of Meta and Google — would actually lead to a wider diversity of experiences,” Chayka told O’Leary.
In the interim, Chayka suggests to Esquire’s Jon Roth: “One answer to Filterworld, to the dominance of these algorithmic feeds, is to find those human voices. Find tastemakers who you like and really follow them and support them and build a connection with those people.”
You can also choose to completely eschew social media, as Chayka did for a period of time.
However, Chayka found “the Internet was no longer designed to function without algorithmic feeds,” as he writes for the New Yorker. “… Blogs and other Web sites whose function was to aggregate headlines or highlight trends, like the original Gawker, had vanished in favor of a network of feeds. The New York Times app, which in the absence of Twitter became my primary way of checking the news, featured a ‘For You’ tab, much like TikTok’s, that suggested articles based on ones I’d clicked previously. …. But that kind of algorithmically guided consumption was exactly what I was trying to avoid. What I was after, most of all, was a little bit of the feeling I’d had online in the early days of the Internet: a sense of creative possibility and even of self-definition.”
So how do you develop a sense of personal taste? He tells Roth “[t]astemaking is almost just being more conscientious about cultural consumption, being more intentional in the way that we’ve become totally intentional about food, right?”
Chayka wishes that we’d translate our foodie culture to the arts: “I would love it if people took more pride in going to a gallery, going to a library, going to a concert series at a concert hall. I think those are all acts of human tastemaking that can be really positive.”
Addressing the charge of elitism, Chayka says, “It’s just about creating a human connection around a piece of culture that you enjoy, and that should be open to anyone.”
Navigating the Creator Economy: Social Commerce Is Everything
TL;DR
Transforming social media platforms from mere engagement spaces to dynamic marketplaces, social commerce stands out as one of the most significant trends for marketers to watch out for in 2024.
With digital video consumption at an all-time high, advertisers are increasingly investing in creator-led content, recognizing its power to drive consumer behavior and sales.
Influencers are pivotal in connecting brands with consumers, utilizing authentic content to drive both online engagement and in-store traffic.
Marketing strategy agency Aspire’s new report, “The State of Influencer Marketing 2024,” casts a spotlight on the formidable influence of social commerce.
Magda Houalla, Aspire’s director of marketing strategy, predicts that word-of-mouth marketing on social media will be the primary way that brands obtain new customers over the next few years.
Social commerce stands out as one of the most significant trends for marketers to watch out for in 2024. Social media platforms are evolving beyond their traditional roles as spaces for engagement and community building, morphing into dynamic marketplaces where discovery and purchase happen seamlessly.
Led by creator content, digital video consumption has reached an all-time high, and advertisers are taking notice. Goldman Sachs forecasts that the burgeoning creator economy, currently valued at $250 billion, will grow to a whopping $480 billion by 2027. This year alone, according to a new report by IAB and TalkShoppe, 44% of marketers plan to increase their investment in creator-led content by an average of 25%. And as the creator economy continues to expand, agencies and other experts are chiming in with winning strategies for brands to harness the power of influencer marketing.
The rise of social commerce is a fundamental evolution in how consumers interact with brands online. It underscores the growing importance of authentic, influencer-driven content in guiding consumer decisions, making social commerce a critical area of focus for brands aiming to capitalize on the creator economy’s explosive growth.
The Impact of Social Commerce
The trajectory of social commerce in the United States is on a remarkable ascent, with revenue projected to leap from $67 billion in 2023 to over $144 billion by 2027, according to Insider Intelligence. This significant growth underscores the shifting dynamics of consumer shopping behaviors, increasingly influenced by the digital landscape and the persuasive power of social media platforms.
Central to this transformation is the role of creator-led content, which has proven especially impactful among Gen Z consumers. Data from Morning Consult reveals a compelling trend: 53% of Gen Z shoppers report making a purchase after watching a product review video on social media. Furthermore, the influence of specific content formats like “product hauls” and “Get Ready with Me” videos is undeniable, driving 40% and 37% of these consumers, respectively, to make purchases.
The rise of social commerce represents a paradigm shift in the retail ecosystem, blurring the lines between social networking and e-commerce. As consumers increasingly look to trusted influencers for product recommendations and insights, the impact of social media on purchasing decisions becomes more pronounced. This trend not only reflects the growing importance of social commerce in the broader creator economy but also emphasizes the need for brands to leverage these platforms and partnerships strategically to engage with and captivate the next generation of consumers.
Strategic Insights on Social Commerce
Aspire’s sixth annual report, “The State of Influencer Marketing 2024,” recently unveiled by the marketing strategy agency, casts a spotlight on the formidable influence of social commerce.
Instagram continues to reign as the most popular channel for influencer marketing among brands, the report found, followed by TikTok. “This year, we saw a record number of brands run TikTok campaigns on Aspire, with a 26% increase in the number of TikTok campaigns year-over-year. This will continue to be the trend in 2024, with brands saying they’ll invest more into Instagram and TikTok this coming year.”
Magda Houalla, Aspire’s director of marketing strategy, encapsulates this trend, predicting, “Word of mouth marketing on social media will be the primary way that brands obtain new customers over the next few years.”
“We’ve seen a lot of creators talk about Stanley, and I think what they do an exceptionally good job at is fostering community and kind of creating this urgency to buy product,” she notes. Of particular interest, she says, is how influencers on TikTok and elsewhere were actually driving in-store traffic as Stanley cups were quickly stripped from store shelves.
Houalla also identifies what she calls the “Queen Bee strategy.”
“This is where you have one macro creator post a hero piece of content … and then a bunch of micro creators that are redirecting towards that hero piece of content, kind of like a beehive,” she explains. In addition to Stanley, other brands using this social commerce strategy include makeup brand Ilya, and Samsung, which uses influencers to peddle high-end smart TVs.
“What I really like about Samsung is the quiet elegance, if you will, of the influencer programs that they’re doing, or the influencer work they’re doing specifically with brands in the home decor space,” she adds. “It is really subtle the way they talk about these brands because it is part of a broader ‘look at my living room makeover’ type content versus a shop.”
Influencers Play a Crucial Role in Social Commerce
Beyond driving sales, influencers play a crucial role in fostering community around brands. They create spaces where followers can discuss, share, and engage with products, turning individual purchasing decisions into collective experiences. This sense of community not only strengthens brand loyalty but also amplifies the impact of marketing campaigns. The relationship between influencers and their audiences is built on trust and relatability, making their endorsements more effective than traditional advertising.
Social commerce is increasingly not just an add-on feature but a central component of many brands’ digital strategies, notes Jessica Deyo at Marketing Dive. “Starting first with creators when it comes to advertising is definitely the wave of the future,” Ali Fazal, VP of marketing for creator management platform Grin, tells Deyo.
Influencers are playing a pivotal role in this shift, acting as the bridge between brands and consumers in the social media space. Their ability to showcase products in a relatable, engaging manner is transforming the shopping experience from a transactional process into an interactive and social event.
“If you look at the heart of influence, you’d be foolish to ignore the underlying component of what makes influencer marketing so successful: a uniquely informed and authentic perspective that audiences can trust within a like-minded community,” powerhouse agency Ogilvy counsels in its latest report, “2024 Influence Trends You Should Know About.”
Livestreaming is one surefire strategy for creators to build authenticity and trust, Ogilvy advises. “Through livestreams, creators can demonstrate products authentically, dispelling fears of staged and deceptive marketing,” the report reads. “Consumers believe the authenticity, as they can probe their creators to use the product and answer questions right in front of their eyes. It’s the shopping center juicing demos of the 90s but behind the protective barrier of your phone screen.”
As we navigate the ever-evolving landscape of social commerce, it’s clear that this trend represents more than just a shift in consumer behavior — it’s a fundamental change in the way brands interact with their audiences. The rise of live-streaming and the strategic emphasis on creators for advertising underscore a new era in digital marketing, one where engagement, authenticity, and community take center stage.
The integration of social commerce into digital strategies marks a significant pivot for brands. It’s a move towards more genuine, influencer-driven marketing approaches that resonate deeply with today’s consumers. This shift not only enhances the consumer experience but also opens up new avenues for brands to forge meaningful connections and drive tangible results. As social commerce continues to grow and shape the digital marketplace, its impact on both online engagement and in-store traffic reaffirms the power of influencer-led content in the modern marketing mix.
The cultural impact a creator has is already surpassing that of traditional media, but there’s still a stark imbalance of power between proprietary platforms and the creators who use them. Discover what it takes to stay ahead of the game with these fresh insights hand-picked from the NAB Amplify archives:
Experts have long charted the convergence of traditional linear TV with streaming video, and that moment has as good as happened. The streaming business model has gone from predominantly subscription to a mix of pay TV, free and ad-supported, much like the TV landscape of old, with the principal difference being the internet delivery infrastructure. The result is that streamers and networks are in a fight to the death to avoid fragmentation, reduce churn and turn a profit. There’s plenty of research pointing media streamers towards what audiences want, but no one, except perhaps Netflix, has cracked the path to profitability.
“We’re amid a linear to streaming transition, and it’s being accelerated by a pending content drought from the Hollywood strikes,” Mike Proulx, VP and research director at Forrester, told Adweek. “Just like many of us are experiencing hybrid work, we are also experiencing hybrid television.”
According to Adweek’s Bill Bradley, hybrid TV is leaving consumers “confused and increasingly penny-pinched” as networks and streamers fight for ad dollars and attention through rebrands, bundles, ad tiers and content cross-pollination.
“It’s kind of this messy middle right now,” Proulx added.
The American market is saturated, with 123 million households subscribing to at least one streaming service, according to recent data from Kantar. Its study highlighted what has been apparent for many months now: customers prize ‘value for money’ above all else before signing up to a new video streaming service. For the first time, eagerness to watch a specific title is not the most important factor in acquiring new viewers.
Another key takeaway from Kantar’s report, conducted from October to December 2023, is that over half of US households now use a Free Ad-Supported Streaming Television (FAST) service. “FAST continues to be the fastest growing streaming type,” advised the researcher.
Diversifying content is another way to provide additional value. Prime Video, AppleTV, Peacock, and Max offer live content such as news and sports. Per Kantar, sports helped drive the growth of both Prime Video and ESPN+, which saw the largest jump in paid streaming market share in the last quarter of 2023. It is also the reason Netflix has spent $5 billion on rights to WWE over the next decade in what is unlikely to be its only grab of live sports rights.
One practical approach to enhance value is allowing the creation of multiple profiles within a single account. This feature, aimed at engaging multiple household members, has led to a 13% increase in paid subscriptions compared to the previous quarter, per Kantar.
Another recent study indicated that 90% of consumers seek greater control over their streaming experience. This includes a preference for customized packages, allowing them to pay only for content that interests them, rather than having access to an entire library that they may not fully explore. The report, commissioned by Amdocs, found that 82% of Americans wanted a single app to access all their streaming subscriptions, thereby simplifying the search for content across various platforms.
The Battle for Ad Dollars
Arguably the biggest battle in the drive to profitability is over advertising spend.
Linear TV fell below 50% of viewing for the first time in 2023, and according to market research company Insider Intelligence, retail media spend will soon pass traditional TV and double it by 2027.
“The proliferation of ad-supported streaming such as AVOD and FAST channels has shaken up the streaming landscape,” Dina Roman, SVP of global ad sales at Fubo, told Adweek.
Paul Verna, principal analyst at Insider Intelligence, told Adweek the number of streamers making more than a billion dollars in advertising revenue will jump from four in 2023 to seven in 2024. Pluto, Tubi and Peacock are set to join Hulu, YouTube, Roku and Amazon. Disney and Netflix aren’t far behind.
The Amdocs study found that younger generations (Gen Z and Millennials) are actually more receptive to ads than older demos. Some 38% of consumers are opposed to seeing more ads, while 45% are open to the idea.
In addition, experts expect more consolidation, bundling and licensing.
“What’s really interesting is some of the recent alliances or bundling between competitors in digital. That’s something that hadn’t happened much before,” Insider Intelligence’s Verna noted to Adweek. “Under pressure from consumer sentiment and consumer behavior, these rival streaming services are going to have to work together.”
Marianne Gambelli, president of ad sales, marketing and brand partnerships at Fox Corporation, expects to see further blurring of the lines between linear, digital and streaming.
“[This] will lead to an active period of creative ways of bundling and re-bundling premium video, which will transform the TV industry,” she said.
Back To the Future
It is clear that the rising cost of living in North America, as in Europe, is forcing consumers to be more judicious in their TV spend. Streamers have it within their gift to offer the economies of scale that consumers crave, potentially with access to more content at more favorable pricing.
But as several commentators note, this is another swing back to the cable delivered pay TV of yesteryear.
Jeremy Haft, CRO of Digital Remedy, told Brad Adgate at Forbes that the trend is, “Ironically, similar to how cable packages operate. In addition, these platforms are finding ad supported tiers offer potentially better margins and also an additional way to create cost-effective experiences for customers.”
Dan Goman, streaming and entertainment expert and founder of Ateliere Creative Technologies, told Consumer Affairs, “The only way forward is the bundle, which might not bode well for consumers as it in its mature form inevitably reduces choice and access to only the content we love most.”
While we are still in the early stages of the reinvention, industry veterans understand where this is headed. Goman warned of “the bloated bundle with three of the channels you actually want to watch and 300 channels you don’t really care about.”
Striking such deals is also complex at the corporate level. Companies see less revenue per user when adding customers through promotions and bundles compared to direct sales, the Wall Street Journal reported in October 2022. Netflix and Warner Bros, for example, will have to share revenue with Verizon.
Verizon SVP Erin McPherson, per the WSJ, says streaming bundles are happening “faster than we thought” and are “here to stay.” Verizon also has a separate bundling deal with Disney.
Verizon CEO Hans Vestberg has said that creating new types of bundles is a company priority. Another telco could act as the third party for the bundle between AppleTV+ and Paramount+.
Another proposed tie-up is between Amazon and Roku.
As Hunter Terry, Head of CTV at Lotame, noted to Forbes, “Why spend $10-30 a month for three or more streaming services, to see movies and shows that you will never watch? Now is a time consumers are reevaluating what makes the cut in their lives. Streaming services are constantly focused on churn reduction. Bundling is one way to overcome this challenge by making it more accessible and more affordable for consumers to sign up to multiple platforms.”
Concern about generative artificial intelligence technologies seems to be growing almost as fast as the spread of the technologies themselves. These worries are driven by unease about the possible spread of disinformation at a scale never seen before, and fears of loss of employment, loss of control over creative works and, more futuristically, AI becoming so powerful that it causes extinction of the human species.
The concerns have given rise to calls for regulating AI technologies. Some governments, for example the European Union, have responded to their citizens’ push for regulation, while some, such as the UK and India, are taking a more laissez-faire approach.
In the US, the White House issued an executive order on Oct. 30, 2023, titled Safe, Secure, and Trustworthy Artificial Intelligence. It sets out guidelines to reduce both immediate and long-term risks from AI technologies. For example, it asks AI vendors to share safety test results with the federal government and calls for Congress to enact consumer privacy legislation in the face of AI technologies soaking up as much data as they can get.
In light of the drive to regulate AI, it is important to consider which approaches to regulation are feasible. There are two aspects to this question: what is technologically feasible today and what is economically feasible. It’s also important to look at both the training data that goes into an AI model and the model’s output.
1. Honor Copyright
One approach to regulating AI is to limit the training data to public domain material and copyrighted material that the AI company has secured permission to use. An AI company can decide precisely what data samples it uses for training and can use only permitted material. This is technologically feasible.
It is partially economically feasible. The quality of the content that AI generates depends on the amount and richness of the training data. So it is economically advantageous for an AI vendor to not have to limit itself to content it’s received permission to use. Nevertheless, today some companies in generative AI are proclaiming as a sellable feature that they are only using content they have permission to use. One example is Adobe with its Firefly image generator.
2. Attribute Output to a Training Data Creator
Attributing the output of AI technology to a specific creator — artist, singer, writer and so on — or group of creators so they can be compensated is another potential means of regulating generative AI. However, the complexity of the AI algorithms used makes it impossible to say which input samples the output is based on. Even if that were possible, it would be impossible to determine the extent to which each input sample contributed to the output.
Attribution is an important issue because it’s likely to determine whether creators or the license holders of their creations will embrace or fight AI technology. The 148-day Hollywood screenwriters’ strike and the resultant concessions they won as protections from AI showcase this issue.
In my view, this type of regulation, which is at the output end of AI, is technologically not feasible.
3. Distinguish Human- From AI-Generated Content
An immediate worry with AI technologies is that they will unleash automatically generated disinformation campaigns. This has already happened to various extents — for example, disinformation campaigns during the Ukraine-Russia war. This is an important concern for democracy, which relies on a public informed through reliable news sources.
There is a lot of activity in the startup space aimed at developing technology that can tell AI-generated content from human-generated content, but so far, this technology is lagging behind generative AI technology. The current approach focuses on identifying the patterns of generative AI, which is almost by definition fighting a losing battle.
This approach to regulating AI, which is also at the output end, is technologically not currently feasible, though rapid progress on this front is likely.
4. Attribute Output to an AI Firm
It is possible to attribute AI-generated content as coming from a specific AI vendor’s technology. This can be done through the well-understood and mature technology of cryptographic signatures. AI vendors could cryptographically sign all output from their systems, and anyone could verify those signatures.
This technology is already embedded in basic computational infrastructure — for example, when a web browser verifies a website you are connecting to. Therefore, AI companies could easily deploy it. It’s a different question whether it’s desirable to rely on AI-generated content from only a handful of big, well-established vendors whose signatures can be verified.
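To make the mechanism concrete, here is a minimal sketch (not any vendor's actual scheme; the key names and content are hypothetical) using the widely available Python `cryptography` package: the vendor signs each piece of output with a private key, and anyone holding the published public key can verify the attribution.

```python
# Illustrative sketch of output attribution via cryptographic signatures.
# Assumes the "cryptography" package; key names and content are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The AI vendor generates a long-lived key pair and publishes the public key.
vendor_private_key = ed25519.Ed25519PrivateKey.generate()
vendor_public_key = vendor_private_key.public_key()

# Every piece of generated content is signed before it leaves the vendor's system.
generated_content = b"Example AI-generated text."
signature = vendor_private_key.sign(generated_content)

# A platform, newsroom or regulator can later check the attribution.
try:
    vendor_public_key.verify(signature, generated_content)
    print("Content verifiably came from this vendor's system.")
except InvalidSignature:
    print("Signature check failed: altered content or a different source.")
```

In practice, the signature would have to travel with the content as metadata and survive copying and re-encoding, which is where much of the remaining engineering work lies.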
So this form of regulation is both technologically and economically feasible. The regulation is geared toward the output end of AI tools.
It will be important for policymakers to understand the possible costs and benefits of each form of regulation. But first they’ll need to understand which of these is technologically and economically feasible.
Cinematographer Cristina Dunlap’s Real/Surreal Approach for “American Fiction”
TL;DR
DP Cristina Dunlap talks about the critically acclaimed satire “American Fiction,” from the logistics of shooting a feature film in less than 30 days to working with first-time feature director Cord Jefferson.
Dunlap turned the tight shoot schedule into a signature style for the film by using longer Steadicam shots to generate coverage in a jazzy musical manner that suited the central character’s middle class affectation.
The film was shot on the ARRI Alexa Mini LF paired with Tribe7 Blackwing lenses in a widescreen 2.35 aspect ratio.
Dunlap created a custom LUT that aligned with her visual concept for the film in collaboration with Phil Beckner at PhotoKem, which was then fine-tuned on set with DIT Mattie Hamer.
The pivotal scene in terms of narrative in the film American Fiction is when disgruntled academic Monk (Jeffrey Wright) sits down to write a book that is deliberately far removed from his own reality.
It’s also when the film “suddenly kind of takes a left turn into surrealism that hadn’t been in the film before,” says director of photography Cristina Dunlap. Speaking to Patrick O’Sullivan on the Wandering DP podcast, she says, “So a lot of what I built the world around was that scene. It was how far are we going to push the surrealism because it comes back again at the end of the film. It’s a satire but it’s also a really grounded family drama as well. It’s a really moving story at times. And also funny. And then there’s the surrealist element. So it was kind of trying to find what the tone was going to be and how to weave that throughout the entire film without it feeling like a bunch of different movies mashed together.”
Cord Jefferson’s adaptation of Percival Everett’s novel Erasure has been nominated for an Oscar in five categories, including Best Picture and Best Actor, having previously won the People’s Choice Award at the Toronto International Film Festival and earned two Golden Globe nominations and five nominations at the 29th Critics Choice Awards, including Best Picture.
It is Jefferson’s feature directorial debut so Dunlap came prepared with a look book and suggestions, taking care not to overwhelm or steer him in the wrong direction.
“I want to hear what you’re thinking,” she told him, as she relates in an interview with fellow DP Lawrence Sher, ASC for the ShotDeck: Shot Talk podcast. “We talked a lot about different references and movies, and because it’s so subjective, what a funny shot looks like just to really get on the same page. Cord was such an open book, he asked a lot of questions. Like, why does that look funnier than being a long lens? I was able to pull up different shots from different movies and sort of show him what I meant.”
The central character’s nickname, Thelonious “Monk,” gave Dunlap a clue as to the style of camerawork, but her decision to use extensive Steadicam was also a practical response to a tight 26-day shooting schedule that included a ton of locations and scenes with many cast members that would usually require extensive coverage.
“I knew we didn’t have time to cover everybody, the way you might with a longer schedule,” she told the Frame & Reference podcast. “So we orchestrated these shots [with Steadicam operator Xavier Thompson] that were pretty Steadicam heavy, where you’re flowing from one room into another room and coverage rotates around.
“Being that the main character’s name is Thelonious Monk, we knew we wanted there to be this like rhythm and jazziness to the way the camera was moving.”
She adds, “I didn’t want to shoot it like a comedy where you’re just in a wide and you see everything. I really tried to watch the rehearsals and then we’d have an idea of what we were going to do so that the camera was always flowing and moving through people and panning to reveal. Having that sort of flow was really important to us – it felt almost musical moving through everybody because there’s such a rhythm in the acting and the editing. And tonally it’s such an all over the place movie that I really wanted there to be some consistency.”
Elaborating on the style to Matt Mulcahey at Filmmaker Magazine, she said, “We never wanted it to feel chaotic or loose, because Monk is so composed and tightly wound. I wanted to always feel like there was a sense of control, except for in two moments, both times with his mother. She’s one of the only things that can make him actually lose composure and show his internal world to the outside.”
Dunlap shot on the ARRI Alexa Mini LF paired with Tribe7 Blackwing lenses in a widescreen 2.35 aspect ratio. Collaborating with Phil Beckner at PhotoKem, she created a LUT that aligned with her visual concept, which was then fine-tuned on set with her DIT, Mattie Hamer. In an interview for the ARRI website, she recounted the challenges of deciding on the film’s aspect ratio.
“Since all of our locations were practical, I realized that I was often seeing the ceilings if I was as wide as I wanted to be in order to highlight Monk’s isolation or distance from his family members. While the film is a comedy it’s also at its core a heartfelt family story and there’s a lot of emotion going on in Jeffrey’s face. Every twitch of an eyebrow has meaning, and you feel that. So, I wanted to be close to him while having that space which worked best in the 2:35 aspect ratio.”
Dunlap has been working as a DP for 20 years, starting out in music videos for the likes of Coldplay and Lizzo. It was a connection she developed on a music video set that led to her being one of a few cinematographers shortlisted to meet with Cord.
“It changed the course of my life, really,” she told Frame & Reference. “I mean I started young, and it’s taken me 20 years to get where I am now. I knew [Cord] was interviewing a lot of DPs, some with credits that were a lot more impressive than [mine], and I wasn’t sure I was gonna get it, but I think I was so passionate about the script that it came through.”
One of the references Jefferson gave his DP was a GIF featuring retired basketball player David Robinson at a game when an elderly white woman stands up in front of him and completely blocks him out of the shot.
“Cord told me how it’s a metaphor for the entire film,” Dunlap explained to Filmmaker Magazine. “I don’t know that he intended for us to use it visually, but when we were blocking the scene where Issa Rae [reads an excerpt from her character’s novel at the Massachusetts Festival of Books] it was absolutely the perfect moment to use that shot. It’s Jeffrey’s character watching everything he’s up against and everything he finds irksome about the book world unfold before his eyes, then a white woman stands up and completely obscures him and takes over the frame with wild applause. I’ve never had a director give me a GIF as a reference before, but that shot ended up being one of the most talked about in the movie.”
Watch This: Refik Anadol, “AI Is an Extension of My Mind”
WHY YOU SHOULD WATCH
Hear directly from one of art’s most innovative and experimental creators, Refik Anadol.
His work combines data and aesthetics in fascinating ways and can be seen in museums and galleries as well as public art installations, such as Sphere in Las Vegas.
Anadol views generative AI as an artistic collaborator, not just another tool.
Anadol is the director of Los Angeles’ Refik Anadol Studio and also serves as a visiting researcher and lecturer in UCLA’s Department of Design Media Arts.
In January, Anadol revealed his latest work, Living Archive: Nature, in Davos, Switzerland, during the 2024 World Economic Forum, according to Artnet. It’s also the debut work from Refik Anadol Studio’s new Large Nature Model, which Artnet reports is an “open-source generative A.I. model dedicated to nature” and “trained on data from National Geographic, the Smithsonian Institute, CornellLab, the Natural History Museum in London, and the Conservation Research Foundation Museum, as well as data his team has personally collected.” The data was collected from ecosystems around the world using LiDAR, photogrammetry, and captured ambisonic audio and high-resolution visuals.
“Our vision for the Large Nature Model goes beyond being a repository or a creative research initiative. It is a tool for insight, education, and advocacy for the shared environment of humanity,” Anadol told Design Boom.
It’s the Remix: Sarah Sze and the Role of AI in Art
Artist Sarah Sze shares her perspective on the intersection of generative AI and the fine arts, as well as how we should think about artificial intelligence as creators, on the Possible podcast.
WHY YOU SHOULD LISTEN
Sarah Sze says, “As a visual artist… your whole purpose is not to be predictable.” She believes this gives humans an edge over LLMs and generative AI, which work by “predicting” the next right word (or combination of images).
However, that doesn’t mean Sze is averse to utilizing AI. Rather, she thinks it’s a new tool to showcase creativity. “The driver is your question and how interesting your question is.”
When it comes to generative AI tools, Sze says, “All of these things are [a] medium for fine arts. I don’t think they replace… the artwork itself. We’re the ones asking the questions. We have to ask the right questions. And that’s what makes it interesting. And if we ask the wrong questions, it’s really not interesting and potentially dangerous.”
How does she feel about her work being fed into AI training data sets? “My work is just a continuum of everyone else’s work anyways,” Sze says. “I hope that I’m like talking to — I mean, it sounds like hubris — but I hope I’m talking to Vermeer. I hope I’m talking to Rembrandt. I hope I’m talking to, you know, Murasaki. … That’s what all artists are doing.”
Sze is a Boston-born contemporary artist known for multimedia, genre-defying work.
In 2023, the Guggenheim hosted Sze’s complex and site-specific “Timelapse” installation, which combined the building’s interior and exterior architecture with projections of digital imagery and sculptural elements. The New York museum also showcased her “Untitled (Media Lab)” and “Timekeeper” pieces to round out the collection.
Deepfakes, Disinformation, Data Leaks: Being Online Is… Not Great
TL;DR
The World Economic Forum highlights the dangers of algorithmic-driven social media feeds and deepfakes in its 2024 Global Risks Report, citing misinformation as the main issue impacting global politics and security over the next two years.
With elections coming in the US, India, Mexico, Indonesia, the UK, and the EU in 2024, monitoring online activity for dis- and misinformation is essential — but unlikely to work.
False information and societal polarization are linked, the WEF reports, with potential to amplify each other. There are moves to counter, regulate and monitor social media networks in the works, but these could be “too little, too late.”
In 2024, we will face a grim digital dark age, as social media platforms transition away from the logic of Web 2.0 and toward one dictated by AI-generated content, says Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge. Writing for Wired, she says online trust will reach an all-time low thanks to unchecked disinformation, AI-generated content, and social platforms pulling up their data drawbridges.
Her view is echoed in a new report by the World Economic Forum, which highlights the risk of AI-generated mis- and disinformation exacerbating a cost-of-living crisis and socio-political polarization.
The WEF’s 2024 Global Risks Report is based on the views of 1,500 global risks experts, policy-makers, and industry leaders. It finds that the world’s top three risks over the next two years are false information, extreme weather, and societal polarization.
The threat posed by mis- and disinformation takes the top spot in part because of how widely open access to increasingly sophisticated technologies may proliferate, disrupting trust in information and institutions.
“The boom in synthetic content that we’ve seen in 2023 will continue, and a wide set of actors will likely capitalize on this trend, with the potential to amplify societal divisions, incite ideological violence, and enable political repression,” said Saadia Zahidi, MD and head of the Centre for the New Society and Economy at the WEF.
What’s more, false information and societal polarization are linked, with potential to amplify each other. Zahidi said, “Polarized societies may become polarized not only in their political affiliations, but also in their perceptions of reality. That can have a profound impact on many crucial issues ranging from public health to social justice and education to the environment.”
These trends are occurring at a time of heightened economic hardship for many people around the globe. Together, this “potent mix” of economic distress, false information, and societal divisions can create challenges for many societies, “providing fertile ground for continued strife, uncertainty, and erratic decision-making,” the WEF warns.
This has broad repercussions for the long-term outlook. A decade from now, according to the WEF’s Global Risks Report, the top three risks are all related to the climate emergency: extreme weather, change to Earth systems, and biodiversity loss. Mis- and disinformation stays high on the agenda at number five, followed by other adverse outcomes of AI technologies at number six, and involuntary migration at number seven, while societal polarization also stays in the top 10.
In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks, the Forum has launched the AI Governance Alliance.
The aim of the Alliance is to unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.
Benjamin Larsen, the WEF’s Lead on AI and ML, says, “Sustained dialogue lays the groundwork for greater cooperation and a potential reversal of digital fragmentation.”
Neff laments the shutdown of access to user data on social media sites like Twitter or Facebook. “Companies have rushed to incorporate large language models into online services, complete with hallucinations (inaccurate, unjustified responses) and mistakes, which have further fractured our trust in online information,” she says.
To clean up online platforms and prevent the excesses of polarization she calls for the adoption of the STAR Framework (Safety by Design, Transparency, Accountability, and Responsibility) that she says would ensure that digital products and services are safe before they are launched; increase transparency around algorithms, rule enforcement, and advertising; and work to hold companies both accountable to democratic and independent bodies, and responsible for omissions and actions that lead to harm.
The EU’s Digital Services Act is another step in the right direction of regulation, but its capacity to ensure that independent researchers can monitor social network platforms will take years to be actionable. The UK’s Online Safety Bill — slowly making its way through the policy process — could also help, but again, these provisions will take time to implement.
Until then, Neff says, “the transition from social media to AI-mediated information means that, in 2024, a new digital dark age will likely begin.”
Redefine the Future of AI Content Delivery: Join Us February 29 for the (Second) NABiQ Live Challenge!
TL;DR
New for this year, “NABiQ Deep Dive” is a series of virtual challenges and live workshops leading up to the dynamic in-person innovation sprint and creative networking event in Las Vegas.
Facilitated by innovation consultant, certified design sprint master and startup coach Maria Halse Duloquin, this 60-minute online knowledge exchange will explore strategies for harnessing the power of AI to provide real-time personalization, customizable content, and improved content discovery.
Gain clarity on concrete challenges and opportunities for content delivery and workflows. Help map out challenges and opportunities for AI by sharing your insights and experience, while meeting — and learning from — other industry professionals.
This 60-minute online knowledge exchange will be facilitated by innovation consultant, certified design sprint master and startup coach Maria Halse Duloquin. The Deep Dive session will delve into topics ranging from real-time personalization and customizable content to providing consumers more accurate recommendations. The session will also include an exploration of what we do and don’t understand so far about AI content delivery, as well as how to embrace the future of content delivery. Insights from this Deep Dive will inform the on-site NABiQ brainstorm sessions at NAB Show, running April 13-17 in Las Vegas.
NABiQ stands out as an unparalleled education and networking opportunity to collaborate, share ideas and overcome industry challenges. Structured like a hackathon, in-person participants form small groups, tackling specific challenges and presenting their innovative solutions for challenges around the three show pillars, Create, Connect and Capitalize.
This year, the dynamic innovation sprint and creative networking event is breaking new ground by introducing live challenges on NAB Amplify leading up to the main event. The digital, three-part series of Deep Dive topics revolves around the future of AI, including content creation, content delivery, and audience engagement and interaction.
Join the Capitalize community for NABiQ Deep Dive: Drive the AI-Era of Audience Engagement on NAB Amplify, scheduled for March 28 at 1 p.m.
This Is Actually a Very Self Aware AI-Generated Documentary
TL;DR
“The Wizard of AI” by British artist, animator and video essayist Alan Warburton is potentially the first AI-generated documentary, exploring the impact of generative AI on creativity and the arts.
The film navigates our current Epoch of “Wonder-Panic,” reflecting the mixed emotions of awe and anxiety that AI evokes within the creative community.
Warburton employed a range of AI tools like Midjourney, Stable Diffusion, and DALL-E 3, alongside Adobe’s creative suite, to craft this visual essay.
Commissioned for the ODI Summit 2023 by the Data as Culture program, the short film challenges conventional documentary filmmaking and delves into ethical, aesthetic, and legal considerations of AI in art.
The project has sparked significant discussion on the future role of AI in art and design, emphasizing the need for a balanced approach to technology that considers both its potential and pitfalls.
Diving into the murky hopes and fears surrounding generative AI, The Wizard of AI by British artist, animator and video essayist Alan Warburton stands out as an exploration of how the burgeoning technology is reshaping creativity. This visual essay, largely crafted by artificial intelligence, ventures into what might be the first AI-generated documentary, offering a nuanced narrative that captures the complex emotions AI evokes within the creative community.
“Using creative workflows unthinkable before October 2023, [Warburton] takes us on a colorful journey behind the curtain of AI — through Oz, pink slime, Kanye’s ‘Futch’ and a deep sea dredge — to explain and critique the legal, aesthetic and ethical problems engendered by AI-automated platforms,” the ODI states about the project. “Most importantly, he focusses on the real impacts this disruptive wave of technology continues to have on artists and designers around the world.”
Through the lens of a hoodie-wearing faceless “AI Collaborator,” Warburton guides viewers through what he calls “our current epoch of Wonder-Panic.” The meticulously crafted narrative blends a variety of visual styles that pay homage to the rich histories of comic books, animation, visual effects, and cinema, embarking on a journey that oscillates between the marvels of AI’s new aesthetic possibilities and the sobering recognition of its implications for creative practices.
“I hope that I strike enough of a balance between wonder and panic that I can represent good actors on both sides,” he comments.
Commissioned as a reflection on the cultural impacts of generative AI, Warburton’s work is a critical and provocative examination of the current state of “wonder-panic” surging through creative communities. It challenges viewers to contemplate the dualities of AI — the excitement of uncharted artistic territories and the apprehension about the future of human creativity.
The AI Filmmaker’s Toolkit
Crucially, The Wizard of AI was produced exactly one year after the release of Midjourney v4, a tool Warburton views as a watershed moment for visual cultures and the creative economies.
“The animation was done over an intense three-week period where the updates to the tools I was using were significant — historic, even,” says Warburton. “Videos that I generated for inclusion [on October 20th] were generated again in early November (just days before the Summit), with improvements in quality analogous to the kinds of improvements we saw in digital cameras between 1995 and 2000. This meant that as I developed the film I was utilizing the most advanced generative AI tools, within hours or minutes of them becoming available.”
The film is a product of both creative vision and a practical application of AI tools. Central to its production was Runway Gen 2, which was used to generate the “AI Collaborator” video clips that guide viewers through the narrative.
For the visual content, Warburton employed Midjourney, Stable Diffusion, and DALL-E 3 to produce the wide array of still images that give the film its distinctive look. The Wizard of AI also features animated elements such as 3-second fish loops created with Pika. The narrative is further enriched by synthesized speech, with TikTok used for creating detective dialogues, and HeyGen for animating a talking detective head.
To ensure high quality visuals, Warburton used Adobe Photoshop AI for image expansion and Topaz Gigapixel AI for upscaling, ensuring clarity and detail. Adobe After Effects played a crucial role in bringing together these diverse elements, stitching them into a coherent visual narrative.
The Cultural Ripple Effect
Warburton’s visual essay serves as a mirror, reflecting the complex emotions and debates surrounding AI’s role in art and creativity. At the ODI Summit 2023, where the film was first unveiled, it became a focal point for discussions on how AI is reshaping the landscape of visual cultures and creative economies.
Warburton’s narrative navigates through the exhilarating possibilities and the unsettling uncertainties that AI introduces to the creative process. “The film is deeply insightful and humorous — prepare to make the phrase ‘Wonder-Panic’ a part of your AI vocabulary,” the ODI notes. This duality encapsulates the awe inspired by AI’s potential to push artistic boundaries and the anxiety over its implications for traditional creative roles.
“Generative AI is something that will have a deep and permanent effect on the ‘culture industries’ — by which I mean curators, art institutions, art schools, design firms and so on. It’s not another trend, it’s a tectonic shift in the currency and culture of images that we can’t reduce to ‘deepfakes’ or ‘post-truth’ but to a relationship between humans and images. It’s an epistemological break!” says Warburton.
“Yet instead of boycotting, I’m playing in the sandbox and seeing what the tools tell me. I do this to demystify and educate, but also because no matter how succulent and seductive an AI image is, the real juice is in analysis, criticism and reflection.”
Steven Da Costa, chairman of Link Digital, praised Warburton’s work for highlighting the need for the creative community to come to terms with the changes AI brings. “Absolute genius and full of perspective we all need to become comfortable with, one way or another,” he remarked.
The Wizard of AI delves into the complex interplay between artificial intelligence and creative expression, highlighting the ethical and aesthetic questions that arise with AI-generated art. The project serves as a critical examination of the wonder and panic that permeates the creative community, and explores the potential for exploitation and the challenges of bias within AI systems, raising important questions about the authenticity and originality of AI-generated art.
The implications of “The Wizard of AI” extend beyond its immediate cultural impact, hinting at a future where AI plays a significant role in documentary filmmaking and creative processes. The film exemplifies how AI can offer new perspectives and storytelling techniques, potentially transforming the documentary genre. However, it also underscores the need for filmmakers to navigate the ethical terrain of AI use carefully, ensuring that the integration of AI into creative workflows enhances rather than diminishes the human element of storytelling.
Perfect safety is no more possible online than it is when driving on a crowded road with strangers or walking alone through a city at night. Like roads and cities, the internet’s dangers arise from choices society has made. To enjoy the freedom of cars is to accept the risk of accidents; to have the pleasures of a city full of unexpected encounters means some of those encounters can harm you. To have an open internet means people can always find ways to hurt each other.
But some highways and cities are safer than others. Together, people can make their online lives safer, too.
I’m a media scholar who researches the online world. For decades now, I have experimented on myself and my devices to explore what it might take to live a digital life on my own terms. But in the process, I’ve learned that my privacy cannot come from just my choices and my devices.
This is a guide for getting started, with the people around you, on the way toward a safer and healthier online life.
The Threats
The dangers you face online take very different forms, and they require different kinds of responses. The kind of threat you hear about most in the news is the straightforwardly criminal sort: hackers and scammers. The perpetrators typically want to steal victims’ identities or money, or both. These attacks take advantage of varying legal and cultural norms around the world. Businesses and governments often offer to defend people from these kinds of threats, without mentioning that they can pose threats of their own.
A second kind of threat comes from businesses that lurk in the cracks of the online economy. Lax protections allow them to scoop up vast quantities of data about people and sell it to abusive advertisers, police forces and others willing to pay. Private data brokers most people have never heard of gather data from apps, transactions and more, and they sell what they learn about you without needing your approval.
A third kind of threat comes from established institutions themselves, such as the large tech companies and government agencies. These institutions promise a kind of safety if people trust them – protection from everyone but themselves, as they liberally collect your data. Google, for instance, provides tools with high security standards, but its business model is built on selling ads based on what people do with those tools. Many people feel they have to accept this deal, because everyone around them already has.
The stakes are high. Feminist and critical race scholars have demonstrated that surveillance has long been the basis of unjust discrimination and exclusion. As African American studies scholar Ruha Benjamin puts it, online surveillance has become a “new Jim Code,” excluding people from jobs, fair pricing and other opportunities based on how computers are trained to watch and categorize them.
Once again, there is no formula for safety. When you make choices about your technology, individually or collectively, you are really making choices about whom and how you trust — shifting your trust from one place to another. But those choices can make a real difference.
Phase 1: Basic Data Privacy Hygiene
To get started with digital privacy, there are a few things you can do fairly easily on your own. First, use a password manager like Bitwarden or Proton Pass, and make all your passwords unique and complex. If you can remember a password easily, it’s probably not keeping you safe. Also, enable two-factor authentication, which typically involves receiving a code in a text message, wherever you can.
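As a rough illustration of what "unique and complex" means in practice (a minimal sketch, not taken from any particular password manager), Python's standard `secrets` module can generate the kind of high-entropy password a manager would store for you:

```python
# Minimal sketch: generate a random, high-entropy password of the kind a
# password manager stores on your behalf. Length and character set are illustrative.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # secrets draws from the operating system's cryptographically secure randomness.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a string you never need to memorize
```

The point is that the password is generated and remembered by the tool, not by you.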
As you browse the web, use a browser like Firefox or Brave with a strong commitment to privacy, and add to that a good ad blocker like uBlock Origin. Get in the habit of using a search engine like DuckDuckGo or Brave Search that doesn’t profile you based on your past queries.
On your phone, download only the apps you need. It can help to wipe and reset everything periodically to make sure you keep only what you really use. Beware especially of apps that track your location and access your files. For Android users, F-Droid is an alternative app store with more privacy-preserving tools. The Consumer Reports app Permission Slip can help you manage how other apps use your data.
Phase 2: Shifting Away
Next, you can start shifting your trust away from companies that make their money from surveillance. But this works best if you can get your community involved; if they are using Gmail, and you email them, Google gets your email whether you use Gmail yourself or not. Try an email provider like Proton Mail that doesn’t rely on targeted ads, and see if your friends will try it, too. For mobile chat, Signal makes encrypted messages easy, but only if others are using it with you.
You can also try using privacy-preserving operating systems for your devices. GrapheneOS and /e/OS are versions of Android that avoid sending your phone’s data to Google. For your computer, Pop!_OS is a friendly version of Linux. Find more ideas for shifting away at science and technology scholar Janet Vertesi’s Opt-Out Project website.
Phase 3: New Foundations
If you are ready to go even further, rethink how your community or workplace collaborates. In my university lab, we run our own servers to manage our tools, including Nextcloud for file sharing and Matrix for chat.
This kind of shift, however, requires a collective commitment in how organizations spend money on technology, away from big companies and toward investing in the ability to manage your tools. It can take extra work to build what I call “governable stacks” — tools that people manage and control together — but the result can be a more satisfying, empowering relationship with technology.
Protecting Each Other
Too often, people are told that being safe online is a job for individuals, and it is your fault if you’re not doing it right. But I think this is a kind of victim blaming. In my view, the biggest source of danger online is the lack of public policy and collective power to prevent surveillance from being the basic business model for the internet.
For years, people have organized “cryptoparties” where they can come together and learn how to use privacy tools. You can also support organizations like the Electronic Frontier Foundation that advocate for privacy-protecting public policy. If people assume that privacy is just an individual responsibility, we have already lost.
Generative AI for Content Creation: The (First) NABiQ Live Challenge
TL;DR
NAB Show introduces “NABiQ Deep Dive,” a series of virtual challenges and live workshops leading up to the dynamic in-person innovation sprint and creative networking event in Las Vegas.
Facilitated by innovation consultant, certified design sprint master and startup coach Maria Duloquin, the January 25th Deep Dive features a panel of industry experts and NAB Amplify senior editor Emily M. Reigart.
The free workshop will be followed by two more Deep Dives on February 29 and March 28, addressing AI in content delivery and audience engagement.
Join NAB Amplify for NABiQ Deep Dive with a series of virtual challenges as we gear up for NAB Show 2024, running April 13-17 in Las Vegas.
NABiQ stands out as an unparalleled education and networking opportunity to collaborate, share ideas and overcome industry challenges. Structured like a hackathon, in-person participants form small groups, tackling specific challenges and presenting their innovative solutions for challenges around the three show pillars, Create, Connect and Capitalize.
This year, the dynamic innovation sprint and creative networking event is breaking new ground by introducing live challenges on NAB Amplify leading up to the main event. The digital, three-part series of Deep Dive topics revolves around the future of AI, including content creation, content delivery, and audience engagement and interaction.
Deep Dive: Unlock Gen AI in Content Creation
Targeting the Create community, the first — free — workshop was held January 25 at 1:00 p.m. EST. Innovation consultant, certified design sprint master and startup coach Maria Duloquin facilitated the event, joined by NAB Amplify senior editor Emily M. Reigart, for a discussion on unlocking the potential of generative AI.
Participants identified where they sit in the M&E universe, with significant representation from the education sector and studios/production work.
Attendees discussed what is understood so far about AI in M&E, along with what we don’t know, as well as how to embrace the future of AI in the industry. Participants raised concerns about AI ethics, legality and regulation, and how to educate ourselves and others about generative AI.
Providing a space for participants to share their insights and experience, and the opportunity to meet and learn from other industry professionals, this workshop helped to map out challenges and opportunities for AI.
Next at Bat
The workshop will be followed by two more Deep Dives on February 29 and March 28, addressing AI in content delivery and audience engagement.
NAB Show itself will feature four daily NABiQ brainstorming sessions, each followed by a happy hour pitch showcase for further networking opportunities. Insights from the Deep Dive sessions will inform the on-site NABiQ brainstorm sessions at NAB Show in Las Vegas.
Is This New Era of Spatial Computing Really… New? Or Are We Just Remaking the Metaverse?
TL;DR
Apple is preparing to bring to market its new head-mounted display Vision Pro which it describes as a spatial computer.
Spatial computing is being presented as different from the metaverse, though the distinction is moot. It is one in which virtual experiences and content will interact with the physical world in new ways through spatial interfaces, and that in turn will change both human-to-computer and human-to-human interactions.
New head-mounted and hands-free internet gateways could mean the end of the smartphone era.
Today’s phrase is spatial computing, a term adopted by Apple to describe its consumer electronics “wearable,” Vision Pro. But as much as companies like Apple, Sony and Siemens might claim that this initiates a new era, there are those wondering if this might be the metaverse by another name.
So scarred is the tech industry by the failure of the metaverse to take off (and so synonymous with Mark Zuckerberg’s Meta has the name become) that the 3D internet, the successor to flat, text-heavy web pages, appears to have been essentially rebranded.
Futurist Cathy Hackl offers this subtle distinction: “Meta is on a mission to build the metaverse, and Quest 2 is their gateway. Apple seems to be more interested in building a personal-use device. One that doesn’t necessarily transport you to virtual worlds, but rather, enhances the world we’re in.”
The term spatial computing has been around at least as long as the term metaverse but is being given a new lease of life by the second coming of augmented reality (AR), virtual reality (VR) or mixed reality (MR) glasses or goggles; the collective term for those acronyms is XR or eXtended reality.
Snap, Sony and Siemens are just some of the companies with new XR wearables due to launch over the next few months. Undoubtedly, all will be a step up in comfort and tech specifications over the early round of such hardware, which was led by Google Glass, Meta’s Oculus and Magic Leap.
Apple’s Magical Step Into the Metaverse
“The era of spatial computing has arrived,” said Tim Cook, Apple’s CEO, promoting the Apple Vision Pro. In the same sentence he then described it as having a “magical user interface [which] will redefine how we connect, create, and explore.”
Let’s get beyond the smoke and mirrors. There’s no “magic” in the Vision Pro other than as a brand name Apple already uses elsewhere (think Magic Keyboard and Magic Trackpad).
The tech community has, however, been keenly looking toward Apple to bring such a product to market. Having defined and popularized categories for consumer tech, including the tablet and the smartphone, the best bet for XR wearables to go mainstream was always going to come from Cupertino.
Encounter Dinosaurs, a new app by Apple that ships with Apple Vision Pro, makes it possible for users to interact with giant, three-dimensional reptiles as if they are bursting through their own physical space.
One reason why Cook and others prefer the term spatial computing is because there is greater confidence that this iteration of the tech can better blend the actual and the digital world with seamless user interaction.
As Cathy Hackl put it, spatial computing is an evolving 3D-centric form of computing that blends our physical world and virtual experiences using a wide range of technologies, thus enabling humans to interact and communicate in new ways with each other and with machines, as well as giving machines the capabilities to navigate and understand our physical environment in new ways.
From a business perspective, says Hackl, it will allow people to create new content, products, experiences and services that have purpose in both physical and virtual environments, expanding computing into everything you can see, touch and know.
It is an interaction not based on a keyboard but on voice and on gesture. As Apple puts it, the Vision Pro operating system “features a brand-new three-dimensional user interface controlled entirely by a user’s eyes, hands, and voice.”
It’s not “Minority Report” just yet, but you can see where this is headed. Here’s Apple’s description: “The three-dimensional interface frees apps from the boundaries of a display so they can appear side by side at any scale, providing the ultimate workspace and creating an infinite canvas for multitasking and collaborating.”
Its screen uses micro-OLED technology to pack 23 million pixels into two displays. An eye-tracking system combines high-speed cameras and a ring of LEDs that “project invisible light patterns onto the user’s eyes” to facilitate interaction with the digital world. No mention is made of having to sign away your right to privacy — this being a pretty invasive aspect of the technology. Do you want Apple to know exactly what you are looking at? If so, expect hyper-personalized adverts pinged to your Apple ecosystem when you do.
Or as Hackl — a tech utopian — writes: “AR glasses will turn one marketing campaign into localized media in an instant.”
Apple’s Competition
Such features are not exclusive to Apple. A new head-mounted display from Sony, designed in collaboration with Siemens and due later this year, also has 4K OLED Microdisplays and an interface called a “ring controller” that allows users to “intuitively manipulate objects in virtual space”. It also comes with a “pointing controller” that enables “stable and accurate pointing in virtual spaces, with optimized shape and button layouts for efficient and precise operation.”
The device is powered by the latest XR processor by Qualcomm Technologies. Separately, Qualcomm has unveiled an XR reference design based on the same chip that features eye tracking technology. The idea is that this will provide a template for third party manufacturers to build their own XR glasses.
The Sony and Apple head-gear are aimed at different markets. Both are hardware gateways to the 3D internet — or the metaverse, even if Apple studiously avoids referencing this and Sony only does so when talking about industrial applications.
Apple Vision Pro is targeting consumers, even if early adopters will have to be relatively well heeled to fork out $3,500 ($150 more for special optical inserts if your eyesight isn’t 20/20).
This Changes… Some Things
Chief applications include the ability to capture stills or video on the latest iPhone, which users will be able to play back in Spatial 3D (i.e., with depth) on their Vision Pro. Viewed on any other device, the same video and stills will appear flat.
FaceTiming someone will also be possible in a new 3D-style experience within the Vision Pro goggles. According to Apple, this “takes advantage of the space around the user so that everyone on a call appears life-size.” To experience that, users will choose their own “persona” (a term Apple picked to differentiate from Meta’s colonization of “avatar”).
In addition, Apple has loaded Vision Pro with TV and film apps from rivals Disney+ and Warner Bros.’ Max (but not Netflix) to be viewed “on a screen that feels 100 feet wide with support for HDR content.” As a reminder, the screen is millimeters from your face.
Within the Apple TV app, users can access more than 150 3D titles, though details are not provided. These could be experimental 3D showcase titles or stereoscopic conversions, a revival of the stereo 3D fad of a decade ago.
More significantly, Apple Immersive Video launched as a new entertainment format “that puts users inside the action with 180-degree, 3D 8K recordings captured with Spatial Audio.” Among the interactive experiences on offer in this format is Encounter Dinosaurs.
No details were given of how this content is created or at what production cost, but Sony’s new XR glasses are targeting the creative community.
Indeed, Sony is marketing its development as a Spatial Content Creation System and says it plans to collaborate with developers of a variety of 3D production software, including in the entertainment and industrial design fields. The device includes links to a mobile motion capture system with small and lightweight sensors and a dedicated smartphone app to enable full-body motion tracking.
In Sony speak, it “aims to further empower spatial content creators to transcend boundaries between the physical and virtual realms for more immersive creative experiences.”
Where Is This Headed?
Spatial computing unshackles the user’s hands and feet from a stationary block of hardware and connects their brains (heads first) more intimately with the internet.
Hackl thinks Vision Pro is the beginning of the end for the traditional PC and the phone.
“Eventually, we’ll be living in a post-smartphone world where all of these technologies will converge in different interfaces. Whether it’s glasses or humanoid robots that we engage with, we are going to find new ways to interact with technology. We’re going to break free from those smartphone screens. And a lot of these devices will become spatial computers.”
She thinks 2024 will be an inflection point for spatial computing.
“Eventually you’ll have a spatial computing device that you can’t leave the house without,” she predicts, “because it’s the only way that you can engage with the multiple data layers and the information layers and these virtual layers that will be surrounding the physical world.”
She admits that right now “there’s a bit of chaos” and that Apple Vision Pro may not be the breakthrough everyone expects in its first iteration.
“To me, the announcement of Apple offers a convergence of the idea of seamless interaction, breaking through the glass and a transformation from social media-driven AI to personal, human AI,” she says. “Will all that happen with the release of Apple’s first headset? No, and I wouldn’t expect it to. That’s a lot to put on one company’s shoulders. But Apple is different from other headset makers which gives us an opportunity to see a different evolution of AR.”
“Presence:” Shooting Steven Soderbergh’s Ghost Story (From the Ghost’s POV)
TL;DR
Steven Soderbergh’s spooky ghost story “Presence” is shot entirely from the ghost’s perspective even though he previously thought point-of-view movies wouldn’t work.
Shooting under his usual DP pseudonym of Peter Andrews, Soderbergh guides his subjective camera into every corner of the house, opening up a number of questions that film theorists and film buffs will gorge on.
Steven Soderbergh’s latest film is a riff on the haunted house genre, with the twist being that the spooky story is told from the ghost’s point of view.
In Presence a family moves into a new home only to recognize an unsettling feeling that something or someone else is also there. The script is by regular Soderbergh collaborator David Koepp, from an idea by the auteur, and stars Lucy Liu, Chris Sullivan, Callina Liang, and Julia Fox.
The story may be familiar, but it is the filmmaker’s camerawork that makes the movie stand out from the horror pack.
“The camera drifts through spaces, hovers around actors, races up and down stairs, and looks out windows — usually in single takes that constitute the entirety of a scene,” describes Bilge Ebiri for Vulture. “Nobody can see this presence, but they do occasionally sense it.”
Lots of films will occasionally cut away to the ghost’s or killer’s or monster’s point of view for some visual flair and added tension. Ebiri notes that Dario Argento perfected the idea in his giallo classics. Sam Raimi turned it into the ultimate lo-fi aesthetic in his Evil Dead movies. Stanley Kubrick riffed on it in The Shining.
“The technique is nothing new,” says Ebiri. “But Soderbergh doesn’t use it as an occasional directorial indulgence, instead maintaining the ghost’s-eye perspective throughout the whole movie. The camera’s presence, the question of where it goes and why, and which characters it focuses on, thus all go from stylistic questions to narrative and thematic ones.”
Previously, Soderbergh has been disparaging about narrative films told entirely from a first-person perspective, but he now appears to have changed his mind. During the post-screening Q&A at Sundance, as reported by TheWrap, he admitted:
“I had real questions about the choice that was at the center of this, because I’ve been very vocal about the fact that first person POV VR is never going to work as a narrative. They want to see a reverse angle of the protagonist with an emotion on their face experiencing the thing. I’ve been beating this drum for a long time — it’s never going to work.”
“The only way to do it was you never turn around,” Soderbergh continued. “It was really fun because there was no other plan. That’s it. You live or die by that.”
As TheWrap’s Drew Taylor observed, the Oscar-winning Ocean’s 11 and Traffic filmmaker is prone to experimentation, like shooting features on iPhones or self-releasing entire shows through his website.
“Presence is another big stylistic swing that connects. Ironically, it’s not hard to imagine the film becoming a hit on VR headsets.”
As is now his custom, Soderbergh is also behind the camera, acting as his own DP under the pseudonym Peter Andrews, which adds another meta layer of analysis for film theory buffs.
“The unseen figure of the ghost becomes an expression of the filmmaker’s power over the frame, evoking the sadistic-voyeuristic nature of cinema in general and genre cinema in particular,” says Ebiri.
He used the newest iteration of the Sony DSLR, which was small and light enough for him to carry for those extended takes, and wore martial arts slippers while sliding around the house to make as little noise as possible.
This presented two challenges, he told Amy Taubin at Filmmaker Magazine. “It probably weighed 10 or 12 pounds, which is fine unless you have to hold it out in front of you for eight minutes. Then it gets hard. A couple of takes are that long, especially the penultimate shot, where there’s a lot going on. My arms are turning to cement.”
The other challenge was the stairs. “I was in that house for a month and there’s no version of me going up and down those stairs without having to look at my feet. What that meant was I had to do a series of rehearsals where I got a sense of where to aim the camera and where I could just feel the level of tilt and pan that would result in the correct composition without looking through the lens. But sometimes I’d get it wrong, and I would ruin a take halfway through.”
The director also edits (billing himself as Mary Ann Bernard) and told the post-screening Sundance audience, as reported by The Hollywood Reporter, that for him this is the most creative part of the process.
“There’s no analogous sort of tasks in any other art form. You’re bringing it all together, all the elements,” he said. “Sound, picture — it is the best. It’s the reward for being on set. The power of it still amazes me. How you can change the intention of something just by reordering shots or holding them at a certain length, pulling out lines, giving a line to somebody else.”
Soderbergh has apparently experienced some unsettling activity in his own house, where a previous owner had died.
“The circumstances surrounding it were very murky, but everybody on the block was convinced that this was not a suicide, as the police described it, but that her daughter had killed her,” he revealed to Filmmaker. “As the son of a parapsychologist, I found that fascinating.”
Also during the Q&A, he talked about the film’s climax, likely to be its most talked-about scene. It was scripted by Koepp and was one that not even Soderbergh expected. Taubin describes the sequence as “a horrific close-up of an act of violence that mesmerizes the camera — just as horror films mesmerize their audience.”
Anyone who has seen Michael Powell’s Peeping Tom may be able to guess what Soderbergh was trying to achieve.
“When I read that scene for the first time, I thought, ‘Wow, that’s, that’s rough,’ and was immediately aware of the sort of philosophical point that you just made. What I didn’t know was how amplified and extreme it would be when we were actually there doing it. My decision ultimately was that it had to be excruciatingly intimate. Which is, you know, 14 inches from a very personal form of murder.”
He also admitted in his interview with Taubin that he is not an aficionado of horror films; “You see a lot of horror films that feel very much like single-use plastic, where you don’t really think about them again,” but what attracted him to the genre was “this whole idea of directorial presence. It’s what the whole thing is built on.”
Nonetheless, critics don’t consider Presence a frightening movie, noting the absence of conventional jump scares.
THR critic David Rooney judged it “an enormously satisfying watch for haunted house movie fans, favoring sustained anxiety over big scares and practical effects over digital trickery.”
Variety’s Owen Gleiberman has problems with the conceit. He writes “we’re just watching a shoestring movie shot with a rather nosy and flamboyant visual style. All very stylish and percussive. But if he had made a version of this movie without the ghost-as-camera-eye conceit, it would have been more or less the same movie.”
He adds that Presence amounts to “another half-diverting, half-satisfying Soderbergh bauble, only this time he’s the ghost in the machine.”
Like, Share, Subscribe, Attend: What’s on the Schedule for Our Creator Lab
The 2024 NAB Show will feature Creator Lab, a new show floor experience, helmed by Jim Louderback and Robin Raskin.
Creator Lab will offer interactive exhibits, expert panels, hands-on workshops and networking events that focus on creators, equipment, distribution channels and monetization.
Investing in the Creator Economy: Hear about the studios, VCs and entertainment companies making investments in the creator economy.
The Creator Ecosystem: Discover how agents, intermediaries and brands are working with creators.
Building on Top Creator Platforms: Discover how to build a billion-dollar media company via YouTube, TikTok, Vimeo, Roblox, Fortnite and other emerging platforms.
“We’re enormously excited to launch Creator Lab as a home base at NAB Show for this community — a hub with tailored education and networking, all in the center of the world’s largest showcase of the tech and tools that make content come to life,” says Chris Brown, EVP and managing director, NAB Global Connections and Events.
Navigating the Visual (and, By Extension, Ethical) Landscape of Generative AI
TL;DR
Generative AI tools like Midjourney and DALL-E 3 can unintentionally produce copyrighted content, challenging the originality and legal boundaries in media.
The use of copyrighted material in AI training raises significant ethical and legal concerns, with potential risks for creators and developers across the media industry.
Solutions include retraining AI models with licensed data and implementing filters for problematic queries, but these pose practical challenges in implementation and effectiveness.
AI Researchers Gary Marcus and Reid Southen emphasize the need for creators, developers, and stakeholders to collaboratively address these challenges, advocating for transparency, ethical development, and respect for intellectual property.
Imagine a world where iconic images from your favorite movies are replicated, not by fans, but by AI systems like Midjourney and DALL-E 3 — all without any direct instruction.
Generative AI tools like Midjourney and DALL-E 3 are capable of creating images that, while not directly instructed to do so, can be strikingly similar to well-known characters and scenes from popular media. This capability, though technologically impressive, poses a risk of infringing on intellectual property rights, inadvertently creating a conflict between AI innovation and copyright law.
This phenomenon, explored in depth by Gary Marcus and Reid Southen in IEEE Spectrum, opens a Pandora’s box of ethical dilemmas and legal quandaries, challenging the very notion of originality.
The issue came to a head in early January, when Midjourney was accused of compiling a database of thousands of artists, including prominent names like Hayao Miyazaki, Matt Groening, and Keith Haring, to train its text-to-image tool.
The database, which included various artistic styles and names, was allegedly used to mimic these artists’ styles, raising issues of copyright infringement and the ethical use of artists’ work in AI training. The revelations arrived amid an ongoing class-action lawsuit between a group of artists and Stability AI, Midjourney and DeviantArt, as It’s Nice That’s Liz Gorny reports:
“A leaked Google Sheet was circulating online on 31 December 2023 featuring visual styles like ‘xmaspunk,’ ‘carpetpunk’ and ‘polaroidcore’ and thousands of individual artist names. The list was allegedly put together by developers at Midjourney; screenshots shared by Riot Games’ Jon Lam (founder of advocacy movement Create Don’t Scrape) appeared to show developers discussing sourcing artists and styles from Wikiart and laundering datasets to avoid copyright infringement.”
Marcus and Southen detail how these advanced AI models can unintentionally produce content that closely resembles copyrighted material, providing concrete examples to illustrate the issue. “In each sample, we present a prompt and an output,” they write. “In each image, the system has generated clearly recognizable characters (the Mandalorian, Darth Vader, Luke Skywalker, and more) that we assume are both copyrighted and trademarked; in no case were the source films or specific characters directly evoked by name.”
Such instances of visual plagiarism are not deliberate actions by these AI models but are instead a byproduct of their training and operational mechanisms. Marcus and Southen explain that large neural networks, which form the backbone of these AI tools, “break information into many tiny distributed pieces; reconstructing provenance is known to be extremely difficult.” This fragmentation means that an AI tool trained on copyrighted material cannot discern between copyrighted and non-copyrighted inputs during its learning process, leading to outputs that inadvertently mirror existing copyrighted works.
This technical backdrop poses a complex challenge. On one hand, the ability of AI to generate such detailed and nuanced content is a testament to its technological advancement. On the other, it brings to light the imperative need for mechanisms that can distinguish between creative inspiration and copyright infringement.
Ethical and Legal Implications
The inadvertent production of copyrighted material by generative AI tools like Midjourney and DALL-E leads to a host of ethical and legal issues, as Marcus and Southen point out: “The very existence of potentially infringing outputs is evidence of another problem: the nonconsensual use of copyrighted human work to train machines.”
The potential legal ramifications are significant. The duo notes that “no current service offers to deconstruct the relations between the outputs and specific training examples,” meaning that it is challenging to trace the origins of AI-generated content back to specific inputs. This opacity in AI’s functioning can lead to legal complexities for creators who might unknowingly use AI-generated content that infringes on existing copyrights.
Marcus and Southen take a strong position on the issue, calling for a moral reckoning in the way AI technologies are developed and utilized, particularly in fields reliant on creative content. “In keeping with the intent of international law protecting both intellectual property and human rights, no creator’s work should ever be used for commercial training without consent,” they state.
Potential Solutions and Their Implications
In addressing the challenges posed by AI-generated visual plagiarism, Marcus and Southen propose several potential solutions, each with its own implications. Their suggestions focus on mitigating the risks of copyright infringement while maintaining the innovative momentum of AI in content creation.
One key solution offered by the authors is the retraining of AI models. “The cleanest solution would be to retrain the image-generating models without using copyrighted materials, or to restrict training to properly licensed datasets,” Marcus and Southen suggest. This approach directly addresses the root of the issue — the input data used in training AI models. However, the practicality of this solution is complex. Retraining AI models on a large scale requires significant resources and time and may impact the quality and diversity of the AI’s output, a concern for an industry that thrives on creative and diverse content.
Another proposed solution is the implementation of filters to screen out problematic queries. While this seems like a straightforward approach, its effectiveness in practice is debatable. Filtering algorithms would need to be sophisticated enough to recognize a wide range of potentially infringing content, which is a challenging task given the nuances and context-dependent nature of copyright law.
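To illustrate why that is debatable, here is a deliberately naive sketch of a keyword blocklist filter; the blocklist entries and function name are assumptions for the example, not any image generator’s real moderation system. Marcus and Southen’s prompts never named the characters that appeared in their outputs, and a filter of this kind would let exactly those prompts through.

```python
# A deliberately naive prompt filter (illustrative only, not any vendor's real
# moderation system). Prompts that describe a character without naming it
# would pass this check, which is the weakness Marcus and Southen highlight.
BLOCKLIST = {"darth vader", "luke skywalker", "the mandalorian"}  # hypothetical entries

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains a blocklisted term."""
    p = prompt.lower()
    return any(term in p for term in BLOCKLIST)

print(is_blocked("Darth Vader walking through a neon city"))            # True: caught
print(is_blocked("black-armored Sith lord with a red lightsaber, 8k"))  # False: slips through
```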
Marcus and Southen also highlight the importance of transparency and responsibility in AI development. They argue for a more ethical approach, stating, “Image-generating systems should be required to license the art used for training, just as streaming services are required to license their music and video.”
The authors acknowledge the challenges in implementing these solutions, noting that “neither [solution] is easy to implement reliably.” However, they emphasize the necessity of such measures, stating, “Unless and until someone comes up with a technical solution that will either accurately report provenance or automatically filter out the vast majority of copyright violations, the only ethical solution is for generative AI systems to limit their training to data they have properly licensed.”
The Role of Creators and Stakeholders
Creators, developers, and other stakeholders play a critical role in navigating the ethical landscape of AI, with a collective responsibility for shaping an AI environment that respects both creativity and legal boundaries, Marcus and Southen contend.
“Users should be able to expect that the software products they use will not cause them to infringe copyright,” they say, emphasizing the duty of developers to create AI tools that not only foster innovation but also safeguard against unintentional legal infringements.
The authors argue for a concerted effort to include artists and creators in the conversation about AI development, suggesting that their insights can lead to more ethically sound and legally compliant AI systems. This collaborative approach can help in establishing standards and guidelines that balance the innovative potential of AI with the rights and interests of creators.
Marcus and Southen also advocate for the development of software “that has a more transparent relationship with its training data,” suggesting that greater visibility into how AI models are trained can lead to more responsible use of these technologies. This transparency, they maintain, is crucial for maintaining trust among creators, users, and the public, ensuring that AI systems are used in a way that is both legally and ethically sound.
They call for a proactive stance from all parties involved in AI development and use. By embracing responsibility, collaboration, and transparency, they say, stakeholders can foster an AI environment that not only drives innovation but also upholds the core values of creativity and respect for intellectual property.
Midjourney produced images that are nearly identical to shots from well-known movies and video games (right-side images: Gary Marcus and Reid Southen via Midjourney).
How “The Woman in the Wall” Merges Mystery, History and Memory
TL;DR
“The Woman in the Wall” stars Ruth Wilson as a woman grappling with the unresolved trauma of time spent in one of Ireland’s Magdalene laundries as an unwed mother, further complicated by the untimely death of two people connected to her incarceration in the local Mother and Baby Home.
The series intentionally weaves together genres (mystery, thriller, drama, comedy) as it tackles subject matter that would otherwise be too dark for many viewers.
The six-episode show was one of the first TV series to be shot using an ARRI Alexa 35 camera. DP Si Bell found its features to be extremely useful when shooting under challenging lighting conditions and a tight schedule.
Showrunner Joe Murtagh told the Washington Post, ‟[W]hen we’re talking about the Magdalene Laundries, we’re really talking mostly about the period between 1922 and 1996. And these were institutions for so-called ‘fallen women.’ Originally, that was meant to mean prostitutes, but over the course of several decades in Ireland, that came to include mothers who had children out of wedlock.”
The show centers on County Mayo resident and sleepwalker Lorna Brady (played by Ruth Wilson, who also served as an executive producer for the series), whose traumatic past is brought straight to her living room when she awakes to discover a corpse with a direct link to her time at the local Magdalene laundry.
British Cinematographer’s Robert Shepherd describes the series as “a six-part gothic thriller by Motive Pictures that combines history and in-depth research to create a sobering narrative.”
But that doesn’t fully capture the tone that creator and head writer Joe Murtagh envisioned for the BBC One drama. For one thing, “gothic” calls to mind “Wuthering Heights”; the series is set in 2015 and centers on a national scandal that continued into the late 20th century in Ireland. It is not historical fiction.
For another, Murtagh imbues the writing with plenty of dark humor. He tells the BBC, “Maybe that’s an Irish thing in general. It’s definitely my natural way of writing, just to go at it with some comedy. I also find that the dark humor, it’s weirdly more realistic — in my life experience anyway — than just doing a straight drama.”
For those who’d find that odd, Murtagh says, “I find that in the most horrific experiences in life, there’s always weird moments of humor and things that don’t quite belong. Someone saying something the wrong way, or it not coming out quite right — that I think is just realistic.
“So, I would say it’s a natural instinct. But at the same time, if I stop and think about it, it’s probably the perfect way to tell a story like this.”
‟To the extent it works, the show is a testament to patient and precise world-building,” Angie Han writes for THR. ‟Kilkinure might be fictional, but the portrait creator Joe Murtagh paints of it has the texture of reality — individuals with prickly personalities or idiosyncratic senses of humor, intricate webs of friendships and grudges, rumors and secrets tracing back years.”
Of course, the series’ genre bending doesn’t sit well with every viewer. A review in the Independent concludes: “For some, this will make the story more watchable — less grueling — but for others the introduction of cliché and the imposition of a murder mystery will feel crass.”
KQED’s Rae Alexandra puts it another way: ‟It’s not always an easy watch, but ′The Woman in the Wall‘ is consistently impossible to look away from — a degree of attention Ireland’s real-life laundry survivors have long deserved.”
Cinematography
Even though the subject matter is Stygian, cinematographer Si Bell, BSC, says, “We shot it at the end of the summer in 2022 in the bright and colorful village of Portaferry and there’s a lot of color there naturally, so we were kind of going against the darkness really.”
In addition to the setting, Bell explains, “We created quite a saturated grade and we tried to push that with the colors we used in the lighting and production design. The red room is all red. We used primary green light sometimes and we had vibrant production design and colored houses to give the show even more vibrancy.”
The production relied on the “versatility and handling” of the ARRI Alexa 35, with its 4.6K Super 35 sensor, Bell says. For example, “It enabled us to use a pretty slow old Cooke zoom lens when we were doing night, high speed work, pushing the extended ISO using the enhanced sensitivity range. We were doing a lot of night scenes and we used ES ISO which was amazing. In terms of the sensitivity, it doesn’t get noisy when you bump up the ISO, so the biggest difference is how clean it is compared to other cameras.”
Bell found the camera’s viewfinder and flipout monitor to be “so clear, with high resolution and sharp.”
He was not the only crewmember who found the camera’s features to be useful. DIT Cel Bothwell-Fitzpatrick, for one, told British Cinematographer that the internal NDs and Enhanced Sensitivity EI options were crucial, and the remote access to the settings (via HI-5 focus handset or Camera Companion app) was extremely handy.
The Alexa 35 also offers 17 stops of dynamic range, which Bell says was crucial because “[t]here were several scenes where the light massively changed outside. I was worried that when we got to the grade it was going to be clipped, and we weren’t going to be able to pull it back, but it was all there.”
Although Ireland’s natural light is famously moody and beautiful, the production often supplemented its interiors for a very specific look.
For example, a scene in episode one featured Lorna’s home lit to simulate sodium-vapor street lights via “Tungsten 12K Fresnels with an urban sodium paper gel,” courtesy of gaffer Seamus Lynch.
Bell says, “Seamus was amazing and the lighting setup that he rigged in the studio was very flexible. Everything was on motors and easily controllable and he also created different soft boxes, so we could flip between day and night super quick. We had large Tungsten lamps, plus lots of LED for the soft boxes and options of softer light.”
In another scene, Bell recalls, they “lit it with really big soft bounce on a machine outside the window where we basically put a 20 by 40 foot ultra bounce with two ARRI MAXs bounced into it. You could see the whole window from the inside as we positioned the bounce above it out of shot.” This, Bell says, created “really soft, natural ambient coming in that balanced the exposure.”
Additionally, every location was rigged to make the lighting look as natural as possible, which was made more challenging because the sets featured hard ceilings.
Northern Ireland’s Yellowmoon was tapped for post work. Bell says, “We had a live grade set up on set and with Yellowmoon we created a LUT and tweaked the CDL shot by shot on set. We started [with] this information in the grade, making the process very streamlined.”
Streamlining was important because they needed to create three versions of the show: an HDR Dolby Vision grade, SDR and HLG. In addition to Yellowmoon’s skilled team, Bell says ARRI’s Color Science and ARRI Look File 4 were important in facilitating the workflow.
Bell explains, “ARRI has separated the Color Transform from the creative look file and it’s all LOG to LOG. Therefore, as it’s not baking in the Color Transform, the process of creating HDR and SDR versions is more streamlined.”
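For readers less familiar with that workflow, the sketch below illustrates the general idea in Python: a creative adjustment (here, an ASC CDL-style operation) is applied log-to-log, and only the final display transform changes per deliverable. The values, gamma numbers and function names are assumptions for illustration, not ARRI’s or Yellowmoon’s actual color science.

```python
# Minimal sketch of a log-to-log creative look feeding multiple deliverables.
import numpy as np

def apply_cdl(log_rgb, slope=1.0, offset=0.0, power=1.0, sat=1.0):
    # ASC CDL-style adjustment applied in log space (log in, log out)
    out = np.clip(log_rgb * slope + offset, 0.0, 1.0) ** power
    luma = out @ np.array([0.2126, 0.7152, 0.0722])      # Rec.709 luma weights
    return np.clip(luma[..., None] + sat * (out - luma[..., None]), 0.0, 1.0)

def display_transform(log_rgb, target="SDR"):
    # Placeholder output transform; a real pipeline would use the camera
    # maker's LUTs for Rec.709, HLG or Dolby Vision targets.
    gamma = 2.4 if target == "SDR" else 2.0               # illustrative values only
    return np.clip(log_rgb, 0.0, 1.0) ** gamma

frame = np.random.rand(4, 4, 3)                           # stand-in for a log-encoded frame
look = apply_cdl(frame, slope=1.05, offset=-0.02, power=0.98, sat=1.1)

# The creative look is set once in log space; only the final transform differs
# per deliverable, which is what keeps the HDR and SDR versions consistent.
sdr = display_transform(look, "SDR")
hdr = display_transform(look, "HDR")
```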
Darkness and Light (But Mostly Darkness): Production on “True Detective: Night Country”
TL;DR
Showrunner, writer and director Issa López discusses the influence of Fincher’s “Seven” film on the new season of anthology series “True Detective.”
The new series has an undercurrent of the supernatural but it also layers in the politics of environmental change, of marginalized communities and, most clearly, male violence on women.
Shooting in the extended daylight and extended dark days of the frozen north required a new approach to lighting the show for DP Florian Hoffmeister.
Mexican horror filmmaker Issa López may not have been an obvious pick to spearhead Season 4 of True Detective, the latest in HBO’s anthology series, but it’s actually a perfect pairing.
López, who created, wrote and directed all six episodes of True Detective: Night Country, is best known for her 2017 crime film, Tigers Are Not Afraid, which earned rave reviews from critics and gained a cult following after its relatively small opening in the US.
“If I can bring back the feeling of two characters that are carving entire worlds of secrets within them, and trying to solve a very sinister crime in a very strange, eerie environment that is America, but it also doesn’t feel like America completely, and then I sprinkled some supernatural in it — I think we’re going to capture the essence of True Detective,” López told TheWrap’s Loree Seitz.
Season 4 of True Detective introduces the franchise’s first female detective duo in detectives Liz Danvers (Jodie Foster) and Evangeline Navarro (Kali Reis), former partners who reunite to investigate the disappearance of six men working at the Tsalal Arctic Research Station in small town Ennis, Alaska.
During lockdown, López had been developing her own murder mystery scripts when HBO came calling. “I believe they saw Tigers Are Not Afraid, which is very gritty and ultra-real and violent, but at the same time has elements of the supernatural and [is] very atmospheric,” she told Matt Grobar at Deadline. “So I think that [they saw] something in that movie, they were like, ‘Oh, this could be an interesting point of view for True Detective.’”
On developing her idea for the format created by Nic Pizzolatto she revealed that David Fincher’s Seven was a big influence. Like True Detective it has two different odd-couple characters who come together to solve a mystery.
“I’m sure that was one of the references that informed Pizzolatto’s writing, at least unconsciously, so I was thinking of Seven. It was two detectives, a forgotten corner of America with its own system of culture and rituals, and it just clicked massively with True Detective. It didn’t take a lot of effort.”
The new series shares with the first season an undercurrent of the supernatural but it also layers in the politics of environmental change, of marginalized communities and, most clearly, male violence on women.
“The environmental theme came when I started to understand the inner workings of northwest Alaska and the industries and the conflicts in the area,” she said to Grobar.
“You just start to create this town, and the forces that pull energy inside it. Mining is a huge deal in that area of Alaska, and there’s constant conflict around the benefits of a burgeoning energy industry, but at the same time, the damage that it creates in an environment where people need the environment to survive. So, it’s just rich grounds to create the story.”
After focusing on women who had been taken and killed in two of the four films López had previously directed, spotlighting the story of a missing Iñupiaq woman felt like the “natural continuation” of her interests.
“It doesn’t matter if it’s Mexico, the US or Canada … this violence doesn’t care for borders,” López said.
When it came to casting, López knew she wanted at least one of the two main characters to come from a native community, and was impressed by Reis, a professional boxer who broke into acting with the 2021 film Catch the Fair One. Reis is of Cherokee, Nipmuc and Seaconke Wampanoag ancestry.
“I knew that one of the characters had to be indigenous because I don’t believe in bringing agents of justice to fix the situation in the native community. I simply don’t believe in that,” López said. “It was a challenge because there were not indigenous stars in the tradition of ‘True Detective,’ but that needs to change,” she said.
“Now we have a Lily Gladstone [Killers of the Flower Moon] and now we have a Kali Reis,” López said. “This is the year that changes.”
López worked closely with Germany-based DP Florian Hoffmeister (Tár, Pachinko, Great Expectations) on each episode. Hoffmeister told Mia Funk and Halia Reingold at the Creative Process podcast why the story’s wider scope appealed to him.
“It’s about the transient nature of life up in the polar North,” he said, “Permanent settlement is only possible since the Industrial Revolution, because normally the indigenous cultures were living and communicating with the land in a whole different way.”
He describes his experience of filming in a region (Iceland stood in for Alaska) where for months on end daylight is restricted to just a few hours a day.
“Further north you get no sunlight at all. And it was an interesting creative decision to embrace how lighting has a whole different utility and necessity than just regular light. If I come home at night here in Berlin, I might switch on a few lights, but if you live in darkness, your relationship with lighting changes. If you live in darkness, you tend to overlight.”
The locations in True Detective: Night Country included an ice rink and a police station, which he lit as if every light were on, “because you’re literally craving light, and you don’t want your workspace during the day to be moody and dark.”
Since the murder mystery genre tends to have moody lighting, this presented an interesting conundrum. “If you go to the supermarket, and it’s minus-20, you will keep your car running while you’re inside. Because if you switch the car off the engine might freeze,” he explained.
“So there’s a whole different way to deal with what we take very commonly as the achievements of our industrialized living environment. I wanted that to be reflected in the lighting. So [scenes] in the police station and ice rink are really bright, basically switched everything on.”
They started prepping the series in summer in Iceland, during which time it barely got dark because of the region’s “endless summer” but finished shooting in almost perma-darkness in the winter months.
“If you light at night in a snowfield, the first thing to burn and [overexpose] will be the snow. So the whole lighting chain outside had to be tackled differently,” he said. “I think there’s some really exciting footage where we shot right on the blink (of darkness) where we can still see and you get the scale of the landscape, but it’s almost disappearing into blackness.”
Hoffmeister also suggests that one of the themes of the show is this feeling of disconnection. “It feels like it’s the end of the world because I think you have this disconnection between us and the environment. And the biggest contrast with the indigenous people that used to live there is that obviously they had to live connected with their surroundings, but we don’t. Not only are we disconnected from our environment, but we’re also disconnected from each other.”
This season of True Detective was primarily shot using an ARRI Alexa 35 with Panavision Ultra and Super Speeds, Hoffmeister told IBC365.
They also chose to shoot some scenes using a stereo rig — pairing two Alexa Mini LFs using Sigma glass, one sans infrared filter to capture the landscape. In post, they blended both feeds to create more of a sense of depth for some outdoor scenes.
“ARRI has introduced a new feature called Textures allowing you to burn in parts of the look. So we built a LUT, and we took part of the LUT and built a texture, which was then burnt into the image,” Hoffmeister explains. (“We” refers to his collaboration with ARRI head colorist Florian ‟Utsi” Martin.) ‟I feel this is the closest in terms of workflow you can come digitally to photochemical image manipulation.”
In another interview, this time with Filmmaker’s Matt Mulcahey, Hoffmeister recalls telling Arri’s Martin that he hoped to achieve ‟really strong blacks and highlights that [popped] but maintained texture so that the highlights, when you switched on the lights, would almost feel like an electric guitar [wailing] in rock music. I really wanted them to pop. That created quite a steep contrast and I thought in the mids it would have made my life very, very difficult, because obviously the mids are where our faces lie. So, you want to be a bit gentler there. I didn’t want to force myself to constantly work with fill light to ease something that I’ve introduced to the picture. So, Florian built us a Texture that would influence exactly that middle part and would be slightly softer in the mids and also create a little bit of a noise that we would associate with grain. And, yeah, we burned it in.”
Critics have generally hailed the new season as a return to form for the anthology series. “The plot is unforgettable, even if the ultimate, gob-smacking denouement may test your credulity,” Carol Midgley reflected at The Times.
“Confidently helmed, stunningly shot and richly performed, it is spellbinding, bone-chilling and does just about the last thing you’d expect from True Detective four series in: it makes you want more,” wrote Rachel Sigee at Inews.
“Night Country is a brilliant inversion of the men-heavy, heat-oppressed, narratively bloated series that have gone before,” The Guardian’s Lucy Mangan found.
Hollywood and GenAI: I Think (Hope) This Is the Beginning of a Beautiful Friendship
TL;DR
There remains a fear that increased use of AI in Hollywood will lead to identical, emotionally hollow content.
Yet, as generative AI moves into production and experiments with AI for scripting unfold, the threat of completely upending how we produce and consume entertainment seems overblown.
The most encouraging developments with AI so far seem less likely to replace humans than to assist them.
The deepest fear Hollywood creatives have about AI is that it will strip jobs from the industry and the life from storytelling. The reality is that Hollywood creatives mostly believe generative AI is nowhere near good enough to produce a final product without huge human involvement.
That AI will have a profound impact on content creation is a given. Film historian David Thomson compares GenAI to the advent of cinematic sound.
But opinions differ as to the extent and value of GenAI’s impact.
Katie Notopoulos at Fast Company outlines the extremes. She quotes Edward Saatchi, founder of production company Fable Studio, predicting a future where there’s a “Netflix of AI” that allows viewers to pick from an array of customized episodes of their favorite shows.
“You could also speak to the television to say, ‘I’d like to have a new episode of the show, and maybe put me in it and have this happen in the episode,’” said Saatchi, whose company is developing an AI-generated animated series.
On the flip side, Adam Conover, writer and board member at WGA West, told Fast Company, “Maybe there will be some AI-generated chum that shows up on Twitch and people have it on in the background while they do their homework — but that’s not going to compete with movies.
“Movies are: ‘I want to go sit in the dark. I want to watch the hottest person in the world say the funniest things in the world and ride a real f***ing motorcycle off a cliff.’ That’s what people want.”
Expected AI VFX Efficiencies
Without doubt AI will increasingly come into play as a time (and cost) saving tool by automating and simplifying complex tasks, most notably in VFX.
Lon Molnar, chief creative officer of VFX company Monsters Aliens Robots Zombies (MARZ), tells Fast Company that smaller-budget movies and shows will have easy access to Marvel-quality effects — in five to 10 years.
That’s still a way off and in any case fits into the wider, ongoing trend of tech advances, from digital cameras to YouTube, ‟democratizing” filmmaking.
Around half of U.S. entertainment industry workers polled by YouGov and Variety Intelligence think that GenAI will be used for processes like sound effects, autocompleting code to assist in game programming and developing 3D assets and artwork for storyboards — within three years.
At a basic level, generative AI could be used to save money on expensive reshoots, even on the tightest of budgets.
With AI, you could “generate a video model based on all the footage from your scene, and then generate new shots based on the photography that you captured,” filmmaker Paul Trillo tells Fast Company. “That’s going to rewrite the rules of postproduction.”
However, the same YouGov/Variety poll found just 18% of U.S. entertainment workers believing that GenAI will be able to effectively write film and TV scripts anytime soon, ranking the lowest of any creative task.
Certainly, chatbots such as ChatGPT are capable of producing output in the manner of a screenplay. “It’s less plausible that AI can yet, soon or ever succeed at producing a complete, coherent and production-ready script without at least some human assistance,” said Variety’s Audrey Schomer.
It is more likely that existing Large Language Model-based AI tools can assist writers in speeding up script development, for example by exploring alternative storylines or generating ideas.
In this sense, “an LLM might better operate as a muse, brainstorming aid or sounding board,” suggests Schomer.
LLM-based tools could help writers rapidly ideate and iterate story concepts, including providing possible settings and scene locations; character names, identities and backstories; and plot points and narrative arcs.
At the same time, studios might experiment with using LLMs for ideation, generating basic concepts for pilots and movies that could be expanded into treatments and scripts, Schomer suggests.
Studio execs who might want to create more content for less money may be constrained (in the short term at least) by the deal struck with the WGA and by reluctance to be hit with copyright lawsuits.
The fear remains, however, that the overriding economic impact of AI will inevitably lead to a future of mass-generated ersatz content, or as The Economist’s Alexandra Suich Bass puts it, “we’ll all be watching synthetic entertainment generated by robots and acted out by CG versions of beloved stars, a hollow version of the films we loved.”
Just as the internet led to an explosion of “user-generated content” being posted to social media and YouTube, generative AI will contribute to reams of videos proliferating online. Some predict that as much as 90% of online content will be AI-generated by 2025.
GenAI may result in more derivative blockbusters and imitation pop songs, but the technology could also generate original ways of storytelling. For instance, AI could be the catalyst for new types of personalized and interactive stories.
With de-aging tech, screenwriters can craft more ambitious time-skipping narratives — something we might see in Robert Zemeckis’ forthcoming feature Here, starring Tom Hanks and Robin Wright.
“With the ability to render convincing cityscapes, historical dramas can roam far beyond the handful of carefully scouted locations that usually serve as their sets,” Notopoulos suggests.
Cristóbal Valenzuela, who runs AI software development company Runway, calls AI a “new kind of camera,” offering a fresh “opportunity to reimagine what stories are like.”
As Trillo explained, “I’m less interested in using AI to make things I can shoot with a camera than creating imagery I couldn’t create before.”
This year, the value of generative AI in the media and entertainment space is expected to reach $1.7 billion, according to a new report. The research also projects a 10-year compounded annual growth rate of 26.3%, reaching $11.5 billion in 2032.
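As a quick sanity check on those figures, the implied compound annual growth rate can be computed from the endpoints; this sketch assumes the $1.7 billion value applies to 2024 and compounds over eight years to the $11.5 billion 2032 figure, which may not exactly match the report’s own window.

```python
# Back-of-envelope check of the report's implied growth rate (values in $B).
start_value, end_value, years = 1.7, 11.5, 8
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 27%, close to the reported 26.3%
```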
While M&E is among the first industries to be impacted from top to bottom by GenAI, there are likely to be strong legal challenges putting the brakes on its adoption this year.
The use of GenAI for text-to-image generation is projected to reach $2.6 billion as the tech “democratizes content creation and fosters collaboration between AI and human artists, making content creation more accessible,” the report argues.
Other segments, such as video generation, 3D modeling and animation, games, film and television, advertising and marketing, music and sound production, and VR/AR, are also poised for “substantial” growth.
The value of GenAI in games, for example, is projected to surge from $477.7 million to just shy of $5 billion by 2032. In film and television, the tech is expected to top $2 billion in value over the same period. AI is playing a pivotal role in scripting, character animation and content personalization, as well as in AI-driven content recommendation algorithms.
The report values generative AI’s use in advertising and marketing to grow to nearly $2.5 billion in a decade as AI increasingly automates content creation for marketing campaigns, offering personalization and efficiency.
In music and sound production, the value of GenAI is set to expand to $1.34 billion by 2032; AI-driven music composition, sound effects, and mastering tools “are streamlining creative processes in the music industry.”
In all of this, the use of cloud (data centers) holds a significant position. As of 2023, this segment accounted for the largest revenue share of 52.7% in the global generative AI market for media and entertainment and is projected to grow at a rate of 26.5% for the foreseeable future.
Cloud deployment offers the extensive computing resources necessary for complex AI algorithms, leading to faster processing and rendering times for high-quality real-time content creation. It also provides scalability and facilitates global team collaboration, allowing for easy sharing, editing, and updating of AI models among teams.
Regional Analysis
In terms of regional analysis, North America stands out as the dominant player for generative AI in the M&E market. Per the report, North America accounted for 40.6% of revenue in 2022; Europe followed with a 22.2% market share in 2023.
The report identified Asia Pacific as “experiencing robust growth,” with a projected market value of approximately $37.45 billion by 2032.
“The Boy and the Heron:” Studio Ghibli Storytelling Goes in a New (Digital) Direction
TL;DR
The darkness and shadows in the latest Studio Ghibli animation are a departure for director Hayao Miyazaki and give a glimpse into the 82-year old’s personal story.
Long time collaborators talk about working with the legendary anime creator and say that he is open to using digital technology in his animation process.
His eyesight may be fading, but rumors of this being his last work may be premature.
The legend of Japanese animator and manga artist Hayao Miyazaki grows greater with each year, not least because the 82-year-old creator of 2001 Oscar winner Spirited Away is shy about giving interviews.
At the release of his most personal work to date, The Boy and the Heron, some of Miyazaki’s longest serving collaborators consider both the man and his work.
Atsushi Okui, for instance, has been the director of photography for films at Miyazaki’s Studio Ghibli since 1992.
“I’ve worked on a lot of films with Miyazaki, and each time the most important job is creating something that matches what’s inside of his head,” Okui told Gemma Gracewood at Letterboxd.
“So I do what is called the ‘finishing work;’ by the time the material has come to me it already has the imagination of the artists and animators. I have to work out how to bring that all together.
“Whether we can recreate the images inside of Miyazaki’s head, or even if they’re different, as long as we can surpass his expectations then that’s okay. That’s what we’re aiming for.”
Working for so long for Miyazaki does come with the advantage of beginning to guess his mind.
Okui told Eli Friedberg at The Film Stage: “Because his storyboarding is so articulate, so detailed and meticulous, that — adding to the fact that I’ve been working with him for 30 years — I find it quite easy to tell what he’s aiming at just by reading his storyboards.
“I wouldn’t say I’m always 100% correct in answering to whatever he writes, but it’s not often that he comes back to me with any feedback other than ‘Okay.’ It’s usually quite a smooth process, in that respect.”
The Boy and the Heron draws from Miyazaki’s childhood, a source of inspiration he initially resisted while creating masterful anime like Ponyo and My Neighbor Totoro. It follows the story of a young boy named Mahito who recently lost his mother. Along with a cunning and deceptive grey heron, he journeys to a mysterious world outside of time where the dead and the living coexist.
To emulate the tenebrous aspects of the story, Okui suggested that they should darken the colors of the animation as well.
“With Studio Ghibli pictures, all of the backgrounds are hand drawn with poster color paints, and then we turn that into data,” he explains to Ryan Fleming of Deadline. “When we turn the handwritten art into data, we have the base be the black background that was painted.
“However, we never attempted to make that any darker in digital or any darker in data except for this one. That was the first time we took upon the challenge of dropping the black even blacker, because unless we did that, we felt that we wouldn’t be able to bring forth the darkness that the main boy in the film harbors.
“So that was kind of a departure from the other films that we had done up until then.”
The muted color palette at the beginning of the movie “matches and reflects Mahito’s interior and his repressed feelings,” according to the DP, as reported by Variety’s Jazz Tangcay.
The crew balanced the darkness of change and war — always implied, never seen — with a fantastical world filled with vibrant creatures and characters. The explosion of color “was intentional,” said Okui.
Okui has also acted as Studio Ghibli’s head of digital imaging for decades. Over time, he has encouraged the renowned animation house to adopt digital animation tools for a more immersive big-screen experience, bringing the CG team fully into the room for production meetings that had been long reserved for artists.
Incorporating Modern Tech
Despite this Miyazaki has something of a reputation for being distrustful of digital animation and computer technology. Okui disputes this.
“I do think the image of Mr. Miyazaki’s [technophobia] might be a little exaggerated,” he tells Friedberg. “But his concerns do make some sense: Mr. Miyazaki is an animator, so whatever he can do manually he wants to do. Where we draw the line at Studio Ghibli is with certain things that you can’t do with hand-drawn skills.”
For camerawork capturing background scenery, for example, sometimes it’s easier to employ digital techniques, he adds. “In these parts, if it’s easier to do it digitally than hand-drawn, Mr. Miyazaki won’t have any problem conceding it — indeed, we’ve employed digital technology in The Boy and the Heron as well.”
A Very Personal Film
The new film draws from Miyazaki’s own childhood memories of being evacuated from bombed-out cities and of his tuberculosis-stricken mother often away in care, as well as from Genzaburo Yoshino’s 1937 novel, “How Do You Live?”
“This is a film filled with a lot of Miyazaki’s own personal ideas,” Okui told Letterboxd. “Until now it was all about capturing the liveliness and freedom that came with the characters, whereas with this film it’s more about expressing their innermost thoughts.”
Producer Toshio Suzuki is a long-time colleague of Miyazaki, as well as a co-founder and former president of Studio Ghibli. He relates to Carlos Aguilar of The New York Times that, growing up, Miyazaki had trouble communicating with people and expressed himself instead by drawing pictures.
“I noticed that with this film, where he portrayed himself as a protagonist, he included a lot of humorous moments in order to cover up that the boy, based on himself, is very sensitive and pessimistic,” Suzuki said. “That was interesting to see.”
If Miyazaki is the boy, Suzuki added, then he himself is the heron, a mischievous flying entity in the story that pushes the young hero to keep going.
In contrast, composer Joe Hisaishi, who first worked with Miyazaki on the 1984 feature Nausicaä of the Valley of the Wind, has a strictly professional relationship with him.
“We don’t see each other in private,” Hisaishi told the paper. “We don’t eat together. We don’t drink together. We only meet to discuss things for work.”
That emotional distance, he added, is what has made their partnership over 11 films so creatively fruitful.
“People think that if you really know a person’s full character then you can have a good working relationship, but that doesn’t necessarily hold true,” Hisaishi said.
“What is most important to me is to compose music. The most important thing in life to Mr. Miyazaki is to draw pictures. We are both focused on those most important things in our lives.”
Miyazaki often declares that “this is his last movie” whenever he’s made a new film, but there’s hope for fans yet.
“At first, I could sense that he wanted this to be his final project,” veteran animator Takeshi Honda, who worked as The Boy and the Heron’s animation director, tells BBC Culture. “But I could sense time and again that he’s not finished, that there are other things that he wants to do.”
Speaking through a translator, Honda cites Miyazaki’s penchant for suggesting stories to adapt. “Sometimes he would just come to me and say, ‘Listen, this novel is really interesting, you should read it!’ and I was like ‘What is this all about? What is he trying to make me do?’ Moments like that made me doubt his intention to retire.”
Studio Ghibli Vice President Junichi Nishioka is even more forthright, telling Gracewood: “He’s still coming into the office every day and thinking of ideas for his next film.”
He added to BBC Culture: “I don’t think he’s ever going to really let go. He will have a pencil in his hand until the very day that he dies.”
“People are reluctant to imagine what could be the future in 10 years, because no one wants to look foolish,” says Alison Smith, head of generative AI at tech consultancy Booz Allen Hamilton. “But I think it’s going to be something wildly beyond our expectations.”
She was talking to the MIT Technology Review, where Will Douglas Heaven provides a status report and a 2024 prediction for GenAI’s evolution by way of asking six pertinent questions.
Read the highlights of these six as-yet unresolved questions below:
1. Will we ever mitigate the bias problem?
Bias has become a byword for AI-related harms, for good reason. Real-world data, especially text and images scraped from the internet, is riddled with it, from gender stereotypes to racial discrimination. Models trained on that data encode those biases and then reinforce them wherever they are used.
AI tool developers like Stability AI and OpenAI are trying to fix the problem in newer versions of their models. Critics say this won’t solve deep-seated issues with the source data.
MIT predicts that bias will continue to be an inherent feature of most generative AI models, but workarounds and rising awareness could help policymakers address the most obvious examples.
2. How will AI change the way we apply copyright?
There are dozens of class action lawsuits against OpenAI, Microsoft, and others, claiming copyright infringement. Getty, for example, is suing Stability AI, the firm behind the image maker Stable Diffusion. Celebrity claimants such as Sarah Silverman and George R.R. Martin have drawn media attention.
But don’t hold your breath. It will be years before the courts make their final decisions, says Katie Gardner, a partner specializing in intellectual-property licensing at Gunderson Dettmer.
By that point, she says, “the technology will be so entrenched in the economy that it’s not going to be undone.”
In the meantime, the tech industry is building on these alleged infringements at breakneck pace. “I don’t expect companies will wait and see,” says Gardner. “There may be some legal risks, but there are so many other risks with not keeping up.”
Some companies have taken steps to limit the possibility of infringement. Google, Microsoft and OpenAI now offer to protect users of their models from potential legal action.
“We’ll take that burden on so the users of our products don’t have to worry about it,” Microsoft CEO Satya Nadella told the MIT Technology Review.
Also, new kinds of licensing deals are popping up. For example, Shutterstock has signed a six-year deal with OpenAI for the use of its images.
Douglas Heaven says that high-profile lawsuits won’t stop companies from building on generative models. “New marketplaces will spring up around ethical data sets, and a cat-and-mouse game between companies and creators will develop.”
3. How will it change our jobs?
The fear of AI taking our jobs may still seem some way off, but that didn’t stop writers from striking for safeguards from their employers this year.
Many researchers deny that the performance of large language models is evidence of true intelligence, and point out that there is a lot more to most professional roles than the tasks those models can currently do.
Yet many businesses are already using large language models for research. Handing over grunt work to machines lets people focus on more fulfilling parts of their jobs. The tech also seems to level out skills across a workforce: early studies suggest that less experienced people get a bigger boost from using AI.
“But change is always painful, and net gains can hide individual losses. Technological upheaval also tends to concentrate wealth and power, fueling inequality,” says Douglas Heaven.
Fears of mass job losses will prove exaggerated, says the MIT Technology Review, but generative tools will continue to proliferate in the workplace. Roles may change and new skills may need to be learned.
4. What misinformation will it make possible?
The Biden administration made labeling and detection of AI-generated content a focus of its executive order on AI in October. But the order fell short of legally requiring tool makers to label text or images as the creations of an AI.
The European Union’s AI Act, agreed upon in December, goes further. Part of the sweeping legislation requires companies to watermark AI-generated text, images, or video, and to make it clear to people when they are interacting with a chatbot. And the AI Act has teeth: the rules will be binding and come with steep fines for noncompliance.
The US has also said it will audit any AI that might pose threats to national security, including election interference.
But here’s the catch: it’s impossible to know all the ways a technology will be misused until it happens.
The MIT Technology Review predicts: “New forms of misuse will continue to surface as use ramps up. There will be a few standout examples, possibly involving electoral manipulation.”
5. Will we come to grips with its costs?
The development costs of GenAI, both human and environmental, are also to be reckoned with. According to the MIT Technology Review, the “invisible-worker” problem is an open secret: “We are spared the worst of what generative models can produce thanks in part to crowds of hidden (often poorly paid) laborers who tag training data and weed out toxic, sometimes traumatic, output during testing. These are the sweatshops of the data age.”
It’s to be hoped that as the human costs come into sharper focus, developers will be pressured to address the issue.
The other major cost, the amount of energy required to train large generative models, is set to climb before the situation gets better. In August, NVIDIA announced Q2 2024 revenue of more than $13.5 billion, twice as much as in the same period the year before. The bulk of that revenue ($10.3 billion) comes from data centers — in other words, other firms using NVIDIA’s hardware to train AI models.
“The demand is pretty extraordinary,” says NVIDIA CEO Jensen Huang. He acknowledges the energy problem and predicts that the boom could even drive a change in the type of computing hardware deployed. “The vast majority of the world’s computing infrastructure will have to be energy efficient,” he says.
But don’t expect significant improvement on either front soon, says the MIT Technology Review.
6. Will doomerism continue to dominate policymaking?
Doomerism — the fear that the creation of smart machines could have disastrous, even apocalyptic consequences — has long been an undercurrent in AI.
OpenAI CEO Sam Altman, among others, has suggested that AI systems should have safeguards similar to those used for nuclear weapons.
Others call this fearmongering and warn that it risks stifling innovation. The debate, they argue, also channels resources and researchers away from more immediate risks, such as bias, job upheavals, and misinformation.
“Some people push existential risk because they think it will benefit their own company,” François Chollet, an AI researcher at Google, tells the MIT Technology Review. “Talking about existential risk both highlights how ethically aware and responsible you are and distracts from more realistic and pressing issues.”
The MIT Technology Review predicts that the fearmongering will die down, but the influence on policymakers’ agendas may be felt for some time.
Indeed, as Douglas Heaven notes, some of the same people ringing the alarm are also raising millions of dollars in investment and making heaps of money for themselves.
There’s a debate about the future of AI, between “doomers” who only see death and “accelerationists” who preach AI utopianism.
January 14, 2024
Past and Present Intersect in Steve McQueen’s “Occupied City”
TL;DR
Academy Award-winning director Steve McQueen’s film “Occupied City” is a four-hour documentary that provides two simultaneous portraits of Amsterdam — one current, the other a record of atrocities during the Nazi occupation of the Netherlands in the early 1940s.
“Occupied City” is the second recent feature film, following “The Zone of Interest,” to address the Holocaust without resorting to overused archival imagery.
McQueen based “Occupied City” on a book written by his wife, the Dutch historian and filmmaker Bianca Stigter. They describe it as a collision of the ghosts of Amsterdam’s past and the reality of the city’s present.
Occupied City is the second recent feature film, following The Zone of Interest, to address the Holocaust without resorting to overused imagery. This four-hour feature documentary by British director Steve McQueen concerns the Nazi occupation of Amsterdam during World War II but doesn’t use archive footage or talking heads, nor does it dramatize any scenes.
The film is based on Atlas of an Occupied City: Amsterdam 1940-1945, a historical encyclopedia written by McQueen’s wife, the Dutch historian and filmmaker Bianca Stigter.
“Bianca had written this extraordinary book, and it’s all her research over the last 20 years or more,” the director explained to A.frame. “It’s not the first book you’d ever think we’d translate into a movie. It’s not an obvious choice.”
Using the text of Atlas as narration, McQueen (who won Best Picture with 2013’s 12 Years a Slave) juxtaposes the history of the city and explanatory narration by Melanie Hyams with footage of life in Amsterdam today, which he shot over the course of several years beginning in 2019 and through the pandemic lockdowns.
“What I wanted was, as you would do in a city, you get lost,” McQueen told IndieWire’s Filmmaker Toolkit podcast, adding that the film was a bit like an English garden. “Unlike a French garden, which is all about the avenues; it’s very symmetrical, very formal. An English garden [has] more to do with wandering and the contemplating and lots of ideas come from those places of wandering and pondering.”
Stigter describes the film as more of a free wandering through the city, while the book is set up more practically, like a guidebook.
In one scene, the elderly owner of an apartment where Occupied City filmed shows the crew her country line-dancing. Set against Hyams’ narration of what happened there during the war, her joyful dancing suggests that she, too, might have her own story of the Nazi occupation.
“There’s something excessive about the movie because — besides from what you see, you also think, ‘What do these people [we’re seeing] have in their heads [from that time]?’” Stigter told IndieWire.
McQueen, who lives in Amsterdam with his wife, found the experience of living in a city that had once been Nazi occupied an unsettling one.
“My daughter’s school was once an interrogation center. Where my son went to school was a Jewish school, so these things were in my every day,” he told A.frame. “When it’s sinking into your pores, you start thinking about it. Coming from London, not having grown up in an occupied city but being here now, it felt like I was living with ghosts. It’s almost like an archaeological dig. This is recent history within the last 85 or 90 years, and I thought this could be fascinating. It is two existences: My presence and another presence.”
Initially, McQueen thought he’d find some archive footage of Amsterdam in WWII to project on top of the present-day footage, but he then decided to use narration based on Stigter’s text and merge the two.
“There’s optimism in [Hyams’] voice, even though there was a dispassionate sort of description of what was going on,” he told NPR’s Asma Khalid. “And that was because I didn’t want to manipulate the audience. It was about the audience bringing the information, receiving the information for the first time.”
He described the process of shooting on 35mm — his favored medium — as a ritual. “It’s so precious, this footage, and it actually adds to the tension of being careful about how you approach the moment,” he shared with the audience at the New York Film Festival.
“It was shooting without a tightrope, in a way,” he added to A.frame. “Young people today shoot digitally; they spray the whole area, shooting for 60 hours and cutting it down to half an hour. You can’t do that with film. The process of making a film and working with Lennert Hillege, the DP, the sound people, and others, it was a beautiful ritual every time we took the camera. I think that was extremely helpful in capturing things, because everyone was very focused.”
Addressing the length of the film, McQueen said it couldn’t be told in an hour and a half. “It needed that contemplation, needed meditations to sort of get into the psyche of the cinema experience, and that time was very important for us,” he told NPR.
Speaking again to A.frame, Stigter said, “It’s essential to have ways to bring history to the fore. We have documentaries, books, and feature films, and this is trying to tell you things about the past in a different way. That’s also why the length is important. It turns it more into a meditation or an experience than a history lesson.”
McQueen, who began his career making video installation art, is also preparing a “36-hour sculptural version” as an art piece. “There are 36 hours of edited footage,” he informed A.frame. “From that 36 hours of edited footage, we took out these four hours, because making a feature film is a very different experience than making the sculptural element of it. Certain things are repeated in that, but you don’t want to do that in a feature film. In some ways, after a particular moment, it condenses itself, and then you decide what you want to keep in and what you want to take out to make it a certain kind of journey.”
Occupied City ends with a bar mitzvah ceremony because it was important to McQueen and Stigter to show the persistence of Jewish life in Amsterdam.
Speaking at the New York Film Festival, Stigter said, “For me the last scene is also very important to show something of contemporary Jewish life in the city, and that was a very beautiful and hopeful conclusion for the movie.”
“I often think watching a movie is like a religious experience,” McQueen added to A.frame. “You’re trying to create meaning in what you see. In this case, the more you know, the less you know.”
He continued this theme with NPR, saying, “When you go to the movies, people try to connect the dots and try to make sense of things. But the lessons learned from this situation is that nothing makes sense. How can you even fathom or sort of get to an understanding of how, for example during this war, six million people died. Try and make sense of that.”
Director Jonathan Glazer pursued an immersive naturalism in “The Zone of Interest” by removing the artifice and conventions of filmmaking.
January 11, 2024
Navigating the Creator Economy: There Are Many (Many) Business Models
TL;DR
New research shows that advertising alongside creator content speeds up the consumer purchase journey, collapsing traditional stages like awareness, interest, and consideration.
According to a report from digital advertising trade group IAB, 92% of advertisers view creator-led content as a high-quality channel, with many planning to increase investment in this area.
Jack Koch, SVP of Research & Insights at IAB, emphasizes the unique appeal and integration of ads in creator content compared to studio-produced content, affecting consumer behavior and advertising effectiveness.
Koch outlines strategies for blending creator content into traditional advertising, focusing on leveraging creator-driven platforms, clear tracking mechanisms, and brand safety controls.
A balanced mix of studio and creator content is essential for engaging diverse viewers, he says, with emphasis on data analysis, audience understanding, and quality content alignment.
The ever-evolving advertising landscape finds a fresh ally in the creator economy, offering brands a direct line to more genuine, engaging connections with their audience. Digital advertising trade group IAB has released its first-ever research report on advertising opportunities within the creator economy. The report, titled “The Creator Economy Opportunity: Where Authenticity Meets Impact,” was conducted in partnership with market research agency Talk Shoppe, and offers a deep examination of the evolving consumer purchase journey, how buyers can leverage the creator economy in their media budgets, and where creator content fits in the media mix alongside studio-produced content.
“The creator economy, valued at $250 billion this year by Goldman Sachs, is expected to nearly double to $480 billion by 2027. Savvy marketers know that they need to reach their customers in content that resonates with them,” IAB CEO David Cohen said in a statement. “There is no doubt creator content is now a vital part of the mix.”
After tracking more than 1,000 consumer purchase journeys, the report found that advertising alongside creator content can accelerate the purchase funnel — collapsing the awareness, interest, and consideration stages. IAB also reported that creator content sparks more action when compared to studio-produced content: after watching, consumers were significantly more likely to search for additional content about the topic and interact with the content by liking, commenting, or subscribing.
Advertisers themselves are feeling bullish about creator-led content, which 92% of respondents said they consider to be a high-quality channel. Around 44% of advertisers plan to increase their investment in creator content in 2024, the report found, with an average increase of 25%. Perhaps most importantly, advertisers don’t have to reinvent the wheel to measure the success of their campaigns. Nearly 90% of advertisers use the same KPI metrics across both creator content and studio content, and 86% have confidence in the ability to measure the effectiveness of creator content campaigns.
Influencers Gotta Influence
Jack Koch, SVP of Research & Insights at IAB, elaborated to NAB Amplify on how creator content influences each stage of the consumer purchase journey.
“Creator content marketing significantly influences the consumer’s purchase journey, from discovering new brands and products to speeding up buying decisions and driving post-purchase loyalty and advocacy,” Koch said. “Creator content is exceptionally powerful in the research/consideration and loyalty/advocacy stages and works in tandem with studio content to maximize influence on the consumer’s path to purchase.”
The perceived authenticity of creator-led content, he added, “allows consumers to visualize products in their daily lives, enabling multiple stages to happen simultaneously. This compresses the purchase funnel and fosters a more confident and rapid buyer decision-making process.”
Creator Content Is Premium Content
The evolving world of digital advertising isn’t just about where ads are placed; it’s equally about how they are perceived. Koch offers insights into how consumer perceptions vary markedly between creator and studio-produced content.
“Consumer perceptions of advertising in creator versus studio content showcase distinct viewpoints,” he explains. This difference is fundamental to understanding the shift in advertising dynamics. Creator content, as per Koch’s analysis, is perceived as “a more engaging and relevant space for ads, offering unexpected inspiration and precise audience targeting.” This is in contrast to studio content, which, while celebrated for its high production value and broad reach, occupies a different niche in the advertising ecosystem.
The distinction becomes more pronounced when considering how ads are integrated into the content. “Consumers perceive ads within creator content as more seamlessly integrated into their viewing experience,” Koch notes.
In contrast, studio content, with its traditional approach, brings a different kind of advantage. “Ads in studio content are deemed slightly more memorable due to familiarity with advertising in that space,” he says. This familiarity stems from years of exposure to traditional advertising formats, making studio content ads more recognizable and, in some cases, more memorable to consumers.
In essence, the authenticity of creator content and the established familiarity of studio content offer two sides of the same coin. Advertisers, by understanding and respecting these differences, can develop more nuanced and effective advertising campaigns that speak directly to the evolving preferences of their audience.
Challenges and Strategies for Embracing Creator-Driven Content
Incorporating creator content into traditional advertising strategies is not without its challenges. Koch sheds light on the complexities and the potential strategies to navigate these waters effectively.
The primary hurdles in merging creator content with traditional advertising stem from three areas: “uncertainty, unpredictability, and initial navigation hurdles,” Koch says.
“Advertisers who might choose not to spend with creators struggle with the uncertainty of ad placement within creator content, hindering alignment with brand values and control over ad environments,” he points out.
This uncertainty can lead to concerns about whether the content aligns with the brand’s values and whether it can be controlled effectively within the advertising environment. “Further, the less predictable performance of creator content complicates ROI tracking and campaign evaluation. Navigating the expansive creator space overwhelms advertisers due to its diverse options.”
To address these challenges, Koch recommends a strategic approach centered on leveraging creator-driven platforms. These platforms offer more control over ad placement and provide advanced analytics tools for real-time performance tracking. Such tools can mitigate the unpredictability factor by offering clearer insights into campaign effectiveness.
Koch also emphasizes the importance of clear tracking mechanisms, like unique discount codes, especially when partnering directly with creators. These mechanisms enable advertisers to assess the performance of their campaigns more accurately and make data-driven decisions.
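To picture how such code-based tracking might work in practice, here is a minimal, purely illustrative sketch; the creator names, codes and order data are invented, not drawn from the IAB report. A brand maps each unique discount code to a creator and tallies the conversions attributed to each.

```python
# Toy sketch of attributing conversions to creators via unique discount codes.
# All names, codes and orders below are hypothetical.
from collections import defaultdict

CODE_TO_CREATOR = {"JESS10": "jess", "MARCO15": "marco", "AVA20": "ava"}

def attribute_orders(orders: list[dict]) -> dict[str, dict[str, float]]:
    """Tally order counts and revenue per creator based on the discount code used."""
    stats = defaultdict(lambda: {"orders": 0, "revenue": 0.0})
    for order in orders:
        creator = CODE_TO_CREATOR.get(order.get("discount_code", ""))
        if creator:  # orders without a tracked code stay unattributed
            stats[creator]["orders"] += 1
            stats[creator]["revenue"] += order["total"]
    return dict(stats)

if __name__ == "__main__":
    sample = [
        {"discount_code": "JESS10", "total": 42.00},
        {"discount_code": "AVA20", "total": 19.99},
        {"discount_code": "JESS10", "total": 25.50},
        {"total": 10.00},  # organic order, not attributed to any creator
    ]
    print(attribute_orders(sample))
```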
Furthermore, embracing brand safety controls is crucial. Koch suggests utilizing technological advancements such as AI content monitoring and participating in industry initiatives aimed at creating safer digital environments. By integrating these creator-driven platforms and strategies, he says, advertisers can harmonize traditional and digital media to effectively achieve their campaign objectives.
The Winds of Change
Valued at $250 billion and projected to reach $480 billion by 2027, the Creator Economy is driving a major shift in digital video viewership.
“While traditional studio content remains vital, creator content has become indispensable in advertising strategies, as 39% of consumers are watching more creator content than they were a year ago, compared to 22% watching more studio content across devices and services,” Koch says. “In fact, according to our study, 89% of advertisers feel positive about creator content advertising, and 92% consider it a high-quality channel.”
Embracing creator-driven content isn’t just a trend; it’s an essential strategy in a landscape where authenticity and multipoint audience engagement reign supreme. “By building trust with partners and creators, aligning budgets with areas of highest consumer impact, and focusing on aligning with quality content, marketers are finding long-term success integrating creator-driven advertising into their holistic media strategies.”
Maintaining Brand Integrity
To effectively integrate creator content while upholding brand integrity and meeting marketing goals, brands should carefully select creators whose content aligns with their values and resonates with the target audience, Koch says. Further, “encouraging an authentic integration of the brand within creator content is vital to maintaining authenticity and trust.”
Adaptability and flexibility are key, he says, allowing brands to refine strategies based on performance feedback. “Diverse partnerships with a range of creators help widen brand appeal across different consumer segments,” he advises.
“Robust measurement tools and unique tracking methods ensure accurate assessment of creator content impact, and leveraging creator-driven platforms for better ad control and embracing brand safety tools are crucial for maintaining brand integrity.”
Finally, Koch advises, brands need to build meaningful relationships with creators. “Fostering long-term relationships with creators and using insights from their content for continuous optimization enables brands to create a cohesive presence across traditional and digital media.”
Winning Strategies for the Ideal Media Mix
“A balanced media mix is crucial,” Koch exhorts. “Studio content can spark conversations and immerse viewers, while creator content captivates with its personally relevant nature that inspires and engages. Both content types meet the diverse needs of viewers, establishing that ‘quality’ content is subjective.”
Advertisers, he says, “should analyze data from both types of content to understand their audience demographics, engagement levels, and performance metrics. Testing and iterating the mix while aligning it with campaign objectives will yield the ideal balance for maximum impact.”
The formula, he says, is simple: build trust with partners and creators, align budgets with areas of highest consumer impact, and focus on aligning with quality content.
Driven by creator-led content, social commerce stands out as one of the most significant trends for marketers to watch out for in 2024.
January 10, 2024
AI Is Here — and Everywhere: Researchers Look at the Challenges for 2024
2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.
We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.
Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder
2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.
One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so — in ways that often do more harm than good.
However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations — and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.
So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.
Kentaro Toyama, Professor of Community Information, University of Michigan
In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” The singularity, the moment artificial intelligence matches and begins to exceed human intelligence, is not quite here yet, so it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.
Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.
The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning — what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.
Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire — comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.
Speaking of problems, the very people sounding the loudest alarms about AI — like Elon Musk and Sam Altman — can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 — though it seems slow in coming — is stronger AI regulation, at national and international levels.
Anjana Susarla, Professor of Information Systems, Michigan State University
In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.
The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.
The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.
A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.
The breakthrough year of generative AI is being likened to the appearance of the first web browsers by a senior scientist at IBM, but, just as with the internet, there will be only a few companies or people to whom power and wealth will accrue.
Darío Gil, SVP and director of Research at IBM, said that at the birth of the Web browser “people imagined experiencing the internet. All of a sudden there was this network of machines, and content can be distributed, and everybody can self-publish.”
Perhaps, though, it should be termed a “tipping point,” a phrase popularized by Gil’s onstage interlocutor, the author and cultural commentator Malcolm Gladwell.
“Previously, AI systems were behind the scenes, [computing] your search results, or translation systems,” Gil said. “Now people can touch it. Fundamentally, this is the moment where the number of people that can build and use AI has skyrocketed.”
Gladwell asked the computer scientist to place AI on a list of the biggest technological achievements of the last 75 years.
Gil said he would put it first.
“Since World War II, undoubtedly, computing has reshaped our world. And within computing, I would say, the role that semiconductors have had has been incredibly defining. I would say AI is the second example of that as a core architecture, that is going to have an equivalent level of impact.”
The third leg will be quantum information, he said. “I like to summarize that the future of computing is bits, neurons, and qubits. It’s that idea of high-precision computation — the world of neural networks and AI and quantum. The combination of those things is going to be the defining force of the next 100 years in computing.”
Leverage AI — and Add Value
However, the distribution of power when it comes to AI will not be equal, Gil predicted.
“It will likely be true that the use of AI will be highly democratized, meaning the number of people that have access to its power to make improvements in terms of efficiency and so on will be fairly universal, and also that the ones who are able to create AI may be quite concentrated.”
Gil elaborated, “If you look at it from the lens of who creates wealth and value over sustained periods of time then just being a user of AI technology is an insufficient strategy.
“And the reason for that is you will get the immediate productivity boost of AI, which will be a new baseline for everybody. But you’re not accruing value in terms of representing your data inside the AI in a way that gives you a sustainable competitive advantage.
“So what I always try to tell people is, don’t just be an AI user; be an AI value creator.”
As a business, he warned, “It would be kind of a mistake to develop strategies that are just about usage. There will be a lot of consequences in terms of the haves and have-nots that will apply to institutions and regions and countries.”
At the same time, predictions about how, where and to what extent AI will have an impact are tricky because of the way that AI draws its information from across systems and networks.
“In this very high-dimensional representation of information that is present in neural networks, we may find amazing adjacencies or connections of themes and topics in ways that the individual practitioners cannot describe.”
We are going to suffer from not knowing the root cause of something impacted by AI, he argued.
“One of the unsatisfying aspects [of AI] is that it may give you answers but not give you good reasons for where the answers came from.”
From this, Gladwell drew implications for how educational and medical institutions would have to change their teaching methods. He said he would encourage institutions not to teach how to use individual AI tools but to embed AI across the curriculum.
“Understanding how we do problem solving in the age of data and data representation means it needs to be embedded in the curriculum of everybody and certainly in the context of medicine and scientists.”
Students entering college are going to know more about AI than the academics do, he said. “That alone is a Herculean people problem.”
What Hollywood Studios Don’t Understand
Institutions of all sorts will have to be at the forefront of integration in order to unlock the full power of AI thoughtfully and responsibly, Gladwell said. “Even Hollywood is being forced to figure this out.”
The writer popularized the idea that if you practice something for 10,000 hours you will achieve world-class expertise in any skill. But with AI on tap to automate and radically speed up the process of achieving goals, where does this leave the creative process?
It is the studios, not so much the writers and actors, that should be concerned for their future, Gladwell said.
“In the strikes, the frightened ones were the writers and not the studios. Wasn’t that backwards? It should be the studios who are worried about the economic impact of AI. Doesn’t AI, in the long run, put the studios out of business long before it puts the writers out of business?
“I only need the studio because the costs of production are as high as the sky and the costs of production are overwhelming.
“Whereas if I have a tool which introduces massive technological efficiencies to the production of movies, then why do I need a studio?”
The MIT Technology Review’s Will Douglas Heaven poses key questions to bear in mind as the generative AI revolution unfolds in 2024.
January 9, 2024
What AI Can Do for the Creator Economy
TL;DR
AI is well on its way to transforming the creator economy. Influencers and brands will experiment with the technology, or avoid it at their peril. Exactly how is still up for debate.
Some things are already clear: Repetitive tasks can and will be automated. Algorithms will be refined by ML.
Concurrently, regulatory reforms in the EU are coming home to roost, further complicating things for creators and brands looking to leverage AI.
Since DALL-E, Midjourney and ChatGPT’s 2022 debuts, experts and laypeople alike have pondered how AI will transform and disrupt industries. “Inside the Creator Economy” editor Jim Louderback predicts AI will generate more opportunities in the creator economy than jobs it makes obsolete.
“I really see the potential for top creators, the ones who are famous, to use AI to allow them to do more, be more creative, become superheroes. And for these mid-level and micro emerging creators, it’ll allow them to level up enough that they can build a career out of it,” Louderback told Venture Beat’s Dean Takahashi.
But not all the AI benefits will be on the creator side; brands stand to benefit, too. For example, Louderback thinks AI can be leveraged to quickly vet potential influencers for “brand suitability” and even analyze submitted video content to check if it’s a good fit. This will especially be a boon for smaller creators.
AI is already enabling some innovative monetization by adding in product placement in post, among other use cases. “We are seeing new ways for advertising to be unobtrusive and get in there and make money for creators,” Louderback says.
But will VTubers kill the TikTok star? Louderback thinks that’s unlikely. However, he says, “We’re going to continue to see experiments on the AI creators and influencers.”
He compares fully simulated influencers to puppets, saying, “Puppets are great, and they can give you the illusion of a connection, which is fine. That can even enable a creator to replicate themselves and interact with a lot of people on Cameo or OnlyFans and have a relationship with them.”
But, Louderback says, “I don’t think you’re going to be taking work away from creators. I think what you’ll be doing is you’re going to be providing new places for creators to be able to make more money and duplicate themselves and extend what they can do with less people.”
There will be a time and a place for AI-generated influencers, however. He predicts these “virtual creators” will be like the Duolingo mascot: “an extension of something that you already see versus something that is brand new and is going to take over and put creators out of business.”
Louderback also agrees that AI will “create a big improvement in productivity in general.” That productivity spike should bolster the economy and help to offset the effects of AI-driven job displacement, he thinks.
Alternatively, Louderback wonders if we’re nearing the point of instituting a universal basic income (for creators, at least) that would keep the content flowing and keep human creatives solvent.
“That sounds kind of like rosy, like unicorns and rainbows, but who knows?” Louderback says.
Louderback is far from alone in his assessment. Adrian Pennington writes, “[T]he industry is betting on AI not to replace creators, but to increase productivity and bring more opportunities for people to make content.”
A recent creator survey from Business Insider found that about half of creators are eager to use AI at work. Another survey revealed ChatGPT as the most popular AI tool for creators, with almost nine out of 10 respondents saying they use it, followed by Midjourney at 31%. Respondents commonly said that they use AI to speed up content creation.
In 2024, creatives “can have an edge over AI-generated content by developing our unique view of the world, but we shouldn’t shun AI as a tool to support us in bringing ideas to life,” author and Zealous CEO Guy Armitage tells Forbes.
In the same article, Streamable Product Director Geoffrey Johnston points out that “AI enables augmentation — not just automation. Automation is a big benefit of AI, but only if the quality is there.”
The Deloitte trio caution that AI’s accelerated content creation will likely put increased stress on creators. They also suggest that marketing teams may need to collaborate more closely with technology and/or product teams working on generative AI content. Additionally, companies need to consider how AI will change ways of working, and adapt organizationally. Finally, pay attention to how personalized content changes consumer behavior.
Also worth considering: The European Union’s Digital Services Act comes into play, as of this month, requiring social media companies to offer a way to opt out of AI-curated feeds.
“While the act is currently only applicable to EU member countries, creators everywhere should consider the effects as global tech companies may choose to follow the GDPR rollout of applying these changes to every market in an effort to streamline compliance efforts,” Zen Media CEO Shama Hyder writes for Fast Company.
Eventually, “fewer users will be able to stumble upon content as easily as they do now, which could potentially reduce exposure and audience size.”
Elena Piech, creative producer at ZeroSpace, shows how AI can allow you to “do more of the creative decision making.”
January 7, 2024
Can Content Credentials Defeat Deepfakes (in Elections and Beyond)?
TL;DR
In 2024, deployment of content credentials will begin in earnest, spurred by new AI regulations in the EU and the United States.
For media companies, content credentials are a way to build trust at a time when rampant disinformation makes it easy for people to cry “fake” about anything they disagree with.
The BBC and other big media organizations are making a push to use a content credentials system to allow Internet users to check the validity and provenance of images and videos.
In 2024, major national elections in some of the world’s biggest democracies, including India, the US and the UK, could be shaped by the alarmingly real threat of online disinformation.
To counter the threat of deepfake content, media companies are making moves to embed news-related video and still images with tags that display their provenance.
So-called data integrity, data dignity or digital provenance has been proposed for a while as the most effective means of combatting AI-manipulated disinformation published online.
Now, major news organizations, tech companies and social media networks appear on track to take concrete steps that would give audiences transparency about the video and stills they are viewing.
“Having your content be a beacon shining through the murk is really important,” Laura Ellis, head of technology forecasting at the BBC, told IEEE Spectrum, which has a thorough report on the latest developments.
The BBC is a member of the Coalition for Content Provenance and Authenticity (C2PA), an organization developing technical methods to document the origin and history of digital media files, both real and fake.
The C2PA group brings together the Adobe-led Content Authenticity Initiative and a media provenance effort called Project Origin, which released its initial standards for attaching cryptographically secure metadata to image and video files in 2021.
Since releasing the standards, the group has been further developing the open-source specifications and implementing them with leading media companies — the Canadian Broadcasting Corp. (CBC) and The New York Times are also C2PA members. Andrew Jenks, director of media provenance projects at Microsoft, is C2PA chair.
As detailed by the IEEE, images that have been authenticated by the C2PA system can include a little “cr” icon in the corner; users can click on it to see whatever information is available for that image — when and how the image was created, who first published it, what tools they used to alter it, how it was altered, and so on. However, viewers will see that information only if they’re using a social media platform or application that can read and display content-credential data.
The same system can be used by AI companies that make image- and video-generating tools; in that case, the synthetic media that’s been created would be labeled as such.
Adobe, for example, generates the relevant metadata for every image that’s created with its image-generating tool Firefly. Microsoft does the same with its Bing Image Creator.
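To make the mechanics a little more concrete, here is a rough, hypothetical sketch of the underlying idea: bind provenance claims to the exact image bytes and sign them so any tampering is detectable. It uses an HMAC secret as a stand-in for the certificate-based signatures and structured metadata containers the actual C2PA specification defines, so it illustrates the chain-of-trust concept rather than the real format.

```python
# Toy illustration of the provenance idea behind content credentials.
# Real C2PA manifests use X.509 certificates and structured containers;
# this sketch only mimics the concept with Python's standard library.
import hashlib, hmac, json
from datetime import datetime, timezone

SIGNING_KEY = b"publisher-secret"  # stand-in for a publisher's signing certificate

def make_manifest(image_bytes: bytes, creator: str, tool: str, edits: list[str]) -> dict:
    """Bind provenance claims to the exact pixels via a hash of the file."""
    claims = {
        "creator": creator,
        "tool": tool,
        "edits": edits,  # e.g., ["crop", "color-balance"]
        "created": datetime.now(timezone.utc).isoformat(),
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the claims are untampered and still match the image."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claims["asset_sha256"] == hashlib.sha256(image_bytes).hexdigest())

if __name__ == "__main__":
    photo = b"...raw image bytes..."
    credential = make_manifest(photo, creator="Newsroom Photo Desk",
                               tool="Camera X", edits=["crop"])
    print(verify_manifest(photo, credential))                # True
    print(verify_manifest(photo + b"tampered", credential))  # False
```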
While only a few companies are integrating content credentials so far, regulations are currently being crafted that will encourage the practice. The European Union’s AI Act, now being finalized, requires that synthetic content be labeled. And the Biden administration recently issued an executive order on AI that requires the Commerce Department to develop guidelines for both content authentication and labeling of synthetic content.
Bruce MacCormack, chair of Project Origin and a member of the C2PA steering committee, told IEEE that the big AI companies started down the path toward content credentials in mid-2023, when they signed voluntary commitments with the White House that included a pledge to watermark synthetic content. “They all agreed to do something,” he notes. “They didn’t agree to do the same thing. The executive order is the driving function to force everybody into the same space.”
Later this year, the CBC aims to debut a content-credentialing system that will be visible to any external viewer using a type of software that recognizes the metadata.
Tessa Sproule, the CBC’s director of metadata and information systems, explains: “It’s secure information that can grow through the content life cycle of a still image. You stamp it at the input, and then as we manipulate the image through cropping in Photoshop, that information is also tracked.”
The BBC has also been running workshops with media organizations to talk about integrating content-credentialing systems. Recognizing that it may be hard for small publishers to adapt their workflows, Ellis’s group is also exploring the idea of “service centers” to which publishers could send their images for validation and certification; the images would be returned with cryptographically hashed metadata attesting to their authenticity.
But none of these efforts will have much impact if social media platforms don’t enact something similar. After all, as the IEEE notes, viewers are more likely to trust an image — validated or not — on the BBC than they are on Facebook.
Meta is reportedly very engaged on this issue down to the practicalities of the additional compute requirements needed for content watermarking.
“If you add a watermark to every piece of content on Facebook, will that make it have a lag that makes users sign off?” says Claire Leibowicz, head of AI and media integrity for the Partnership on AI.
However, she expects regulations to be the biggest catalyst for content-credential adoption.
If social media platforms are the end of the image-distribution pipeline, the cameras that record images and videos are the beginning. In October, Leica unveiled the first camera with built-in content credentials; C2PA member companies Nikon and Canon have also made prototype cameras that incorporate credentialing.
But hardware integration should be considered “a growth step,” says Microsoft’s Jenks. “In the best case, you start at the lens when you capture something, and you have this digital chain of trust that extends all the way to where something is consumed on a Web page,” he says. “But there’s still value in just doing that last mile.”
Synthetic images and videos are “going to have an impact” in the 2024 US presidential election, warned Jenks. “Our goal is to mitigate that impact as much as possible.”
Two misinformation experts explain why you’re not that good at detecting fake videos and how you can develop the power to resist them.
January 7, 2024
Yep, Every Business Is Now a Content Business
TL;DR
Creator collaborations are more than just a trend; they are a key component of any successful modern marketing strategy, says Cristy Garcia, chief marketing officer at impact.com.
When every brand is also now a content business, brands can’t just rely on providing a good product or service, she says. They also need to genuinely engage their audience, which means working with the right creator in the right way.
Branding expert David Harris stresses the importance of authenticity when choosing a creative partner, urging brands to strike a balance between brand objectives and a creator’s unique vision.
It should come as no surprise to digital marketers that content creators are a necessary partner to reach key audiences, nor that authenticity and trust are the foundations on which successful partnerships are built.
Nonetheless, the point is worth emphasizing given the importance to both sides — brands and creators — of building the creator economy.
According to data from impact.com, some brands are now reaping over 28% of their total company revenue through partnerships, while those committed to what the company calls the “partnership economy” over the long term are achieving year-on-year revenue channel growth exceeding 50%.
With the rise of influencers, content creators, and the overall creator economy, it seems everyone (journalists, major brands, social platforms, marketers) has since recognized the power of creators in directly and authentically reaching and engaging larger audiences than any static ad ever could, Cristy Garcia, chief marketing officer at impact.com, says.
Creating the Partnership Economy
Crucially for marketers, the creator economy is remarkably effective in driving campaign performance. Nearly a third of all social media users discover new products through influencers, and Goldman Sachs suggests that, by 2027, the creator economy will nearly double in size from $250 billion to a potential $480 billion globally — “a statistic marketers cannot afford to ignore,” emphasizes Garcia.
She adds, “It is evident that the creator economy, with its adept utilization of videos, still has untapped potential.”
Leveraging Your Influence(r)
Creators may be experts in their respective domains and “an exceptionally trusted resource” for consumers, but that doesn’t mean they will all make perfect matches for brands.
“Just because an influencer may be good at promoting businesses doesn’t mean they’re a fit for yours,” says Garcia. “You’ll want to find partners with aligned audiences and ones who match your brand’s values.”
Rather than squandering resources on “broad, spray-and-pray marketing campaigns,” she adds, brands are natively introduced to a new or shared audience authentically.
“For the best ROI, we recommend paying your creator partners not just a flat fee or performance-based payouts but combining both of them to ensure your creator is happy with the arrangement and so is your budget.”
In the past, brands would simply pick the top celebrity and ask them to push their product. Nowadays, consumers are much more savvy, and this simply won’t fly.
“Consumers are becoming increasingly aware of marketing tactics,” branding expert David Harris notes. “We cannot stress the importance of authenticity when choosing a creative partner.”
Harris has some advice to brands wanting to work with creators. In essence, collaboration is key.
“If your brand chooses to work with creators, you need to strike a balance between your brand’s objectives and the creator’s unique vision,” he urges. “The spirit of collaboration needs to be at the heart of everything you do together.”
Jaron Lanier: Is Data Dignity the Answer for Regaining “Control” of AI?
TL;DR
Jaron Lanier, an influential technologist who also works for Microsoft, explains what it means to apply data dignity ideas to artificial intelligence.
Lanier argues that large-model AI can be reconceived as a social collaboration by the people who provide data to the model in the form of text, images and other modalities.
In thinking of AI this way, he suggests new and different strategies for the long-term economics of AI, as well as approaches to key questions such as safety and fairness.
Jaron Lanier, an influential computer scientist who works for Microsoft, wants to calm down the increasingly polarized debate about how we should manage artificial intelligence.
In fact, he says, we shouldn’t use the term “AI” at all because doing so is misleading. He would rather we understand the tech “as an innovative form of social collaboration.”
He set out his ideas in a piece published in The New Yorker, “There Is No AI,” and elaborated on them further in a conversation recorded for University of California Television (UCTV), “Data Dignity and the Inversion of AI,” co-hosted by the UC Berkeley College of Computing, Data Science, and Society and the UC Berkeley Artificial Intelligence Research (BAIR) Lab.
Lanier is also an avowed humanist and wants to put humans at the center of the debate. He calls on commentators and scientists not to “mythologize” a technology that is actually only a tool.
“My attitude doesn’t eliminate the possibility of peril: however we think about it, we can still design and operate our new tech badly, in ways that can hurt us or even lead to our extinction. Mythologizing the technology only makes it more likely that we’ll fail to operate it well — and this kind of thinking limits our imaginations,” he argues.
“We can work better under the assumption that there is no such thing as AI. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.”
A Tool for Social Collaboration
So if the new tech isn’t true AI, then what is it? In Lanier’s view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.
AI is just a computer program, albeit one that mashes up work done by human minds.
“What’s innovative is that the ‘mashup’ process has become guided and constrained, so that the results are usable and often striking,” he says.
“Seeing AI as a way of working together, rather than as a technology for creating independent, intelligent beings, may make it less mysterious — less like HAL 9000,” he contends.
Delivering on “Data Dignity”
It is hard but not impossible to keep track of the input of humans into the data sets that an AI uses to create something new. Broadly speaking, this is the idea of “data dignity,” a concept doing the rounds among computer scientists as a way out of the impasse over making AI work for us, not against us.
As Lanier explains, “At some point in the past, a real person created an illustration that was input as data into the model, and, in combination with contributions from other people, this was transformed into a fresh image. Big-model AI is made of people — and the way to open the black box is to reveal them.”
Such “data dignity” appeared long before the rise of big-model AI as an alternative to the familiar arrangement in which people give their data for free in exchange for free services, such as internet searches or social networking. It is sometimes known as “data as labor” or “plurality research.”
“In a world with data dignity, digital stuff would typically be connected with the humans who want to be known for having made it. In some versions of the idea, people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do.”
He acknowledges that some people will be horrified by the idea of capitalism online, but argues that his strategy would be a more honest capitalism.
Nor is he blind to the difficulties involved in implementing such a global strategy. It would require technical research and policy innovation.
Yet, if there’s a will, there will be a way, and the benefits of the data-dignity approach would be huge. Among them: the ability to trace the most unique and influential contributors to an AI model and to remunerate those individuals.
“The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models,” he caveats. “Over time, though, more people might be included, as intermediate rights organizations — unions, guilds, professional groups, and so on — start to play a role.”
People need collective-bargaining power to have value in an online world — a loophole he doesn’t fully address. Here Lanier’s humanist side gets the better of him, and his imagination runs toward liberal optimism.
He continues, “When people share responsibility in a group, they self-police, reducing the need, or temptation, for governments and companies to censor or control from above. Acknowledging the human essence of big models might lead to a blossoming of new positive social institutions.”
There are also non-altruistic reasons for AI companies to embrace data dignity, he suggests. The models are only as good as their inputs.
“It’s only through a system like data dignity that we can expand the models into new frontiers,” he says.
So it is in Silicon Valley’s interest to remunerate humans whose data they collect, in order to then create better and bigger AI models that have an edge over competitors.
“Seeing AI as a form of social collaboration gives us access to the engine room, which is made of people,” says Lanier.
Context, Not Apocalypse
He doesn’t deny there are risks with AI, but he also doesn’t subscribe to the most apocalyptic, end-of-species scenarios of some of his peers. Addressing the issue of deepfakes, the misuse of AI by a bad actor, he gives a stark example of how data dignity might come to the rescue.
Suppose, he says, that an evil person, perhaps working in an opposing government on a war footing, decides to stoke mass panic by sending all of us convincing videos of our loved ones being tortured or abducted from our homes. (The data necessary to create such videos are, in many cases, easy to obtain through social media or other channels.)
“Chaos would ensue, even if it soon became clear that the videos were faked. How could we prevent such a scenario? The answer is obvious: digital information must have context. Any collection of bits needs a history. When you lose context, you lose control.”
“May December” and Its Many Mirrors: How the Cinematography Tilts and Shifts Perception
TL;DR
Netflix’s “May December,” the new film from director Todd Haynes, examines how little people understand about how they appear to others.
The story of real-life teacher Mary Kay Letourneau, who married the much-younger object of her desire after serving time for second-degree rape of a child, serves as a springboard for the film.
As with his other films, Haynes employs stylistic techniques to remind audiences that they are watching an artistic construct.
The film was shot using ARRI’s new Alexa 35 by DP Christopher Blauvelt rather than longtime Haynes collaborator Edward Lachman.
Blauvelt appreciated the increased latitude of the Alexa 35, which allowed him to shoot in bright daylight without overexposure.
Few people really know what they sound or look like or how they appear to others. That is among the essential ideas that director Todd Haynes’ new film on Netflix, May December, examines. The film concerns Gracie (Julianne Moore), a woman who has married and raised children with Joe (Charles Melton) — a pairing that began when she was already married and in her 30s and he was just 13. We quickly learn that their relationship landed her in prison and resulted in a national scandal similar to that of real-life teacher Mary Kay Letourneau, who married the much-younger object of her desire after serving time for second-degree rape of a child. May December is clearly meant to summon audiences’ memories of Letourneau’s situation as its springboard.
The story kicks off when Elizabeth (Natalie Portman), an actress set to portray Gracie in an indie film, spends time with Gracie, Joe and their children in an attempt to absorb useful information about the character she’s scheduled to play. Gracie sees this as an opportunity to shape Elizabeth’s performance into the sympathetic portrait she feels is truthful, while Elizabeth is focused on gathering any detail, the more sordid the better, that she can use in her performance.
As is almost always the case with Haynes’ filmmaking, the director isn’t looking to make the audience feel like they’re watching something “real,” or to forget that they’re watching a movie. Instead, he searches for ways to remind audiences that they are watching an artistic construct, whether by using the stylistic techniques of a ‘50s melodrama in Far from Heaven, the approach of a sensationalist exposé, as in his well-known short Superstar, in which a Barbie doll stands in for singer Karen Carpenter, or a mid-century romance in Carol.
As Haynes told American Cinematographer back in December 2002, following the release of Far from Heaven, which was stylized to look like a Douglas Sirk “women’s picture” of the 1950s: “I think the best movies are the ones where the limitations of representation are acknowledged, where the filmmakers don’t pretend those limitations don’t exist. Films aren’t real; they’re completely constructed. All forms of film language are a choice, and none of it is the truth. … We’re not using today’s conventions to portray what’s ‘real.’ What’s real is our emotions when we’re in the theater. If we don’t have feeling for the movie, then the movie isn’t good for us. If we do, then it’s real and moving and alive.”
Among the techniques he uses in May December to achieve this type of aesthetic distancing: scenes framed by mirrors; coastal Georgia locations captured through large amounts of diffusion; grain laid in during post for additional texture; and a seemingly discordant music track, repurposed from 1971’s torrid romance The Go-Between from director Joseph Losey and writer Harold Pinter.
Distancing techniques notwithstanding, the film also avoids traditional filmmaking tropes that cue the viewer about how to feel about what they’re watching. Could there be some truth in Gracie’s notion that her relationship was borne of true love and evolved into normal family life? Is Elizabeth’s attempt to cut through Gracie’s public front a search for truth or just another form of exploitation? In that sense, May December could be said to be this year’s Tár, with final judgment ultimately left to the viewer.
As Haynes summarizes in his director’s statement from Netflix, “All lives, all families, are the result of choices, and revisiting them, probing them, is a risky business. But it’s hard to think of more volatile romantic choices than these, and all the more so when so many defenses have been called upon to shut out such unanimous contempt and judgment from the world.”
He adds, “But as Elizabeth observes and studies Gracie and her world, and gets to know her husband Joe, her reliability as narrator begins to falter. The honest portrait she hopes to erect, her own investment in revealing truths, becomes clouded by her own ambitions and presumptions, her own denials.”
Haynes originally intended to work with frequent collaborator Edward Lachman to shoot the film, but those plans were disrupted when the cinematographer of his award-winning features, including Far from Heaven, Carol and I’m Not There, suffered an injury that prevented his participation. Cinematographer Christopher Blauvelt, who knew both Haynes and Lachman quite well, stepped in during the brief prep period and handled the cinematography for the rapid-paced 23-day shoot in and around Tybee Island, just outside of Savannah, Georgia.
In conversation with playwright Jeremy O. Harris at the New York Film Festival, Haynes recounted that he had no compunction about working with Blauvelt, who’d shot several of the films of respected indie director Kelly Reichardt, including First Cow, Showing Up and Certain Women. “Kelly Reichardt is a dear, dear friend and one of the great independent filmmakers working in the world today. She, her last, what, five, six films were all shot by Chris Blauvelt… I’ve known Chris for years because he worked under Harris Savides, who was one of the greats.”
It had already been determined that May December would be shot with ARRI’s new Alexa 35, which offered some features that would be conducive to the project’s fast pace and limited lighting crew.
“I was immediately interested in this camera because [I understood] that the latitude was even more than the [previous Alexas] and I had never used it before,” Blauvelt explains to Nick Newman at The Film Stage. “When I went to test it at Keslow Camera in Atlanta… we were in a warehouse with a giant door, and I had a person in there that I was shooting for my tests with some string lights and a chart and the other things you have at a camera test. But I had this door open, so I had sunlight out of the back, and I kept opening and opening and opening that door and [the camera] maintained [definition in] the clouds, like, forever!” he says.
“It felt like I couldn’t clip. I couldn’t make it overexposed! So that was what I needed. And I was really happy to have that much latitude going into [this] film, knowing that I would be stuck in bright daylight without the tools to slow things down. It was a tremendous help.”
As Blauvelt explained to Newman, “I think there’s a big interest in finding these older, beautiful [lenses] that we used to use because the digital can be super-clinical. You know high definition is not flattering if you shot everything clean and right to your sensor. You’re looking, now, at pores on skin and it doesn’t lean into a ‘cinematic look’ — like from the past — that we all are inspired by and love. So, there’s people that have been rehousing these old lenses to match all of our gears and make them more user-friendly.”
Of the 1930s-era Baltar glass, he says, “You can’t crack them open because it’s toxic — like poisonous gas — because they’ve been encapsulated for so long, and the materials they used were, like, pine tar to make the gears work. And so what they do now is: they cover them. Like the rehousings are just built over the old lenses. So, you can look at a lens that’s built to be this big, to be user-friendly with big marks and everything for the focus and aperture, and you look inside, and the lens is, like, this big. [Spreads hands]
“There’s a characteristic of each lens, right? Like, we tested Cooke Panchros; we tested Super Baltars, normal Baltars, Cooke S4s — which are the most contemporary ones I would use. But even still: I say that and it’s funny because those lenses were made 35 years ago. [Laughs] But those, to me, are as sharp as I’ll get because I’m always trying to find a way to sort of disarm the eye for perfection of digital.”
Blauvelt spoke to Vanity Fair’s David Canfield about Haynes’ desire to avoid crisp, clinically clear images for his brand of visual storytelling.
When he got to the location, he recalls, “Todd was showing me all these images and there was this inherent sea-worn glass, this sort of haziness on things because of the ocean air. I could tell that was just a natural occurrence. It reminds me of Todd. Todd has this old, really shitty phone, and he would take a photo of a set with it, and it would already look like that. [Laughs]
“So they were showing me images already discolored — it just became this throughline. This very texturized filmic look comes from a lot of the inspirations that Todd had already had. To me, we were all on the same page in regard to finding these places and these frames and the way we lit.”
Further elaborating to ICG Magazine, Blauvelt says, “We wanted it to feel texturized. We wanted to give the feeling of this place where the windows are covered in a marine layer, and there’s all this haze, and sunlight warming things, and leaving moisture between window panes. We embraced it and never cleaned a window. We were shooting through screen and brush, which helped to give a filmic look.”
Valentini also reports that Blauvelt made use of heavy diffusion from Schneider Optics Radiant Soft filters in front of the lens, in strengths from 1/4 up to five, sometimes stacking more than one for the right effect.
Another feature of May December involves the use of mirrors to frame the action, simultaneously reinforcing the theme of characters’ limited ability to see themselves and others accurately and adding more of those layers of distance between the viewer and the characters.
At several points, the camera takes the position of a bathroom mirror in which Elizabeth studies Gracie’s approach to applying makeup. In one scene, Elizabeth delivers an extended monologue into a mirror, again with the camera pointing at her. Shots like these simply use the proscenium as if it were a mirror, with the actors performing directly to the lens. Some other scenes that actually show mirrors within the shot were more complex to execute.
In a scene that has been widely referenced in articles and reviews, Gracie’s daughter Mary (Elizabeth Yu) tries on dresses to wear to her high school graduation. Gracie and Elizabeth sit outside the store’s dressing rooms and, in an extended oner, the camera is pointed directly at the two women sitting side-by-side surrounded by mirrors, framed to show Elizabeth flanked by both Gracie and, on her other side, Gracie’s reflection. The dramatic point of the scene is to observe Gracie’s offhanded and crushing response to her daughter modeling the sleeveless dress she wants. But acquiring the shot as envisioned presented the problem of hiding a camera pointing directly at a mirror.
To accomplish this, Blauvelt, the camera and crew were placed behind a two-way mirror — one that is a typical mirror on one side and clear on the other. Haynes explains to Vanity Fair, “The challenge was how to hide the camera, and which angles the mirrors were going to be; when you have any mirror on any set, it’s difficult because you’re hiding lights and stands and everything. I always stare at the little vanity over Natalie’s shoulder because that’s where the camera is hidden. Also, it’s great conceptually. When I watch the film and see how it works and integrates into our multiplicity of what’s happening within the story, it makes so much sense. Your eye can go in any direction. We play it mostly as a one-er, and so it relies a lot on their performances, which are just immaculate.”
Haynes elaborates to Adam Chitwood at TheWrap, saying his initial idea for the shot was much simpler, but it evolved from there. The performers are surrounded by mirrors and the camera had to be positioned just right so it wouldn’t catch any errant reflections of the set or crew. It was one of the most complicated scenes in the entire shoot, and Blauvelt said it was a true team effort to nail it.
“It’s not exclusive to me, or even the departments, it’s like a collective that goes all the way back to the genius of the writing, and the characters, and Julianne Moore and Natalie Portman and Elizabeth Yu,” he continues. “When that happens, and all the pistons are firing and you know that we got there from everybody really understanding the intent and building something like that, it’s the best feeling you can have as a filmmaker.”
“The Zone of Interest”: Ways to Film the Unfilmable
TL;DR
Director Jonathan Glazer went to great lengths to pursue an immersive naturalism in his screen depiction of the Holocaust, “The Zone of Interest,” removing the artifice and conventions of filmmaking.
The filmmakers gave their actors freedom to improvise by rigging multiple cameras for long takes with the actors often unaware if the cameras were even rolling.
Cinematographer Łukasz Żal also deployed a thermal imaging camera to juxtapose black and white scenes of energy and hope with the bleak world of color.
Writer-director Jonathan Glazer refuses to be drawn into making comparisons between other Holocaust screen depictions and his new film The Zone of Interest.
“I don’t like getting involved in a genocide-off,” he told Giles Harvey of The New York Times, commenting on repeated attempts by the press to get him to talk about why he felt his approach differs from the likes of Schindler’s List, Son of Saul or the documentary Shoah.
He went on to clarify that his decision to tackle this highly sensitive subject was rooted in his family history. Glazer’s grandparents were Eastern European Jews who fled the Russian Empire in the early 20th century. He also went to a Jewish state school in London.
The British director had not yet finished Under the Skin in 2013 when he told his longtime producer, James Wilson, about his idea for the next project.
“He did not want to do another, quote-unquote, ‘Holocaust movie.’ Jon has a very small filter when it comes to doing something that’s never been done before,” Wilson told Rolling Stone’s David Fear. “But neither of us knew what that something would be.”
Glazer’s idea was galvanized in 2014 by reading about the latest novel by the late Martin Amis, a story told from the viewpoint of a fictional Nazi commandant who ran a concentration camp in Nazi Germany and lived next door to it.
In Amis’ book, the Dolls were loosely based on Rudolf Höss, the real-life commandant of Auschwitz, and his family. Glazer’s first big call was to revert to the originals. Before starting work on the script, he spent two years researching the Hösses, during which he came across a staggering data point: The garden of their villa shared a wall with the camp. What feats of denial, he wondered, would it have taken to live in such proximity to the damned?
“I wanted to dismantle the idea of them as anomalies, as almost supernatural. I wanted to show that these were crimes committed by Mr. and Mrs. Smith at No. 26,” he told the NYT.
“I looked at the darkening world around us, and had a feeling I had to do something about our similarities to the perpetrators rather than the victims,” Glazer elaborated to Rolling Stone. “When you say, ‘They were monsters,’ you’re also saying: ‘That could never be us.’ Which is a very dangerous mindset.”
Instead, he began to see the Hösses as “non-thinking, bourgeois, aspirational-careerist horrors” who’d simply normalized evil.
“There were two givens on the film. … That it would be in its native languages — German and some Polish, obviously — and that we would film it in Poland. And Jon really wanted to film it in the real house,” producer James “Jim” Wilson told Gold Derby’s Charles Bright.
Wilson describes visiting the Höss family home and experiencing its proximity to Auschwitz (which he says is “kind of holy”) as generating “one of the lightbulb moments.”
“The Höss house is still there, and the proximity to the camp is striking,” Glazer told The Hollywood Reporter’s Scott Roxborough. “I imagined myself at one point as a prisoner, imagining hearing the sounds of the Höss children splashing and laughing in their swimming pool on the other side of the wall. The idea of the film became about that wall, about how that wall is a direct manifestation of how we ourselves as human beings compartmentalize the things we were happy to indulge in and surround ourselves with and the things — sometimes horrible things — we want to disassociate ourselves from. That became the axiom of the whole endeavor.”
Because the Höss home is the heart of the film, Glazer tasked production designer Chris Oddy with building a fully functional set. Oddy deemed the actual building’s “80 years of decrepitude” insurmountable and instead opted to renovate a nearby building crafted in a similar style, according to Roxborough’s reporting. Glazer mandated that the end result should look “as if it had been built yesterday.”
In fact, Oddy says, “The only scene shot in the original house comes late in the film, where Rudolf walks from his office through the real underground tunnel that connects the camp with his home.”
Another key set piece, the family garden, required a full-year head start to adequately mature before filming began, Oddy says.
The production goal was an immersive naturalism, and Glazer went to great lengths pursuing it, telling Vanity Fair’s David Canfield that he sought to “remove the artifice and conventions of filmmaking that lead you down a road which didn’t feel relevant here: screen psychology. The way that cinema fetishizes, glamorizes, empowers—in this context, none of those were appropriate.”
Instead, as Fear notes in Rolling Stone, The Zone of Interest uses suggestion and sound — what Glazer refers to as “ambient evil” — to conjure up how human beings could look upon the methodical killing of other human beings as background noise in their lives rather than a profound tragedy.
Oddy’s recreation of the Höss home at Auschwitz was rigged with 10 hidden cameras that would roll simultaneously.
“Cinema is at odds with atrocity,” Glazer explained to the NYT. “As soon as you put a camera on someone, as soon as you light them, or make a decision about what lens to use, you’re glamorizing them.”
Cinematographer Łukasz Żal (Ida and Cold War) made some initial studies of the house. Glazer told him they were “too beautiful.” He wanted the images to seem “authorless.”
The two-time Oscar-nominated Polish DP explained to John Boone at Aframe, “You have to forget completely all the tricks you’re carrying with you as a cinematographer and all your knowledge and everything you were taught in your career. The whole idea of this film was just to put the cameras in the places where you can see what is happening in the most objective way, and that’s it. Very simple.”
This method gave the actors — principally Christian Friedel (Babylon Berlin) as Rudolf and Sandra Hüller (Anatomy of a Fall) as his wife, Hedwig — the freedom to improvise; they were often unaware if the cameras were even rolling.
Hidden Cameras
“There was nobody on set [except] the actors,” Żal informed Aframe. He remotely monitored from a shipping container outside the house. “The actors were just living their lives, and because we had 10 setups at the same time and were shooting everything, after one or two hours we had all the setups we needed. We had continuous action, and the sun was changing, the light was changing, the clouds were coming and going, the dog was running through the house, and everything was captured on the 10 cameras.”
Sony VENICE cameras were chosen partly because the sensor block can be detached from the camera body via Sony’s Rialto extension system, making it possible to hide cameras all over the set, including in the garden, which is the Hösses’ pride and joy.
“The whole idea was to create a space for actors to just be there, be in the situation and for us to witness with no interruption,” Żal told fellow DP Mandy Walker in an interview for the ASC. “We were able to attach those cameras to the walls, hide them in cardboard, in the bushes, try and cover them with fabric. We were just placing them in different spaces in the garden, in the house. Everything was hardwired. The five focus pullers were in the basement of the house, and we were in the shipping container behind the wall. That was our mission control and we were just shooting the whole scene without interruption, continuously.”
The film intersperses the desaturated color of the Höss family life with stark black and white footage of a girl leaving apples in a work camp for the Jewish prisoners. This effect was captured at night using a special thermal imaging camera, as Żal explained to Vanity Fair.
“We spent a lot of time adjusting this for filming in terms of focus and image and also software, because it’s not so easy to use this camera and get the image we would like to have.”
Glazer explains that the camera is recording heat, not light, adding, “there’s something very beautiful and poetic about the fact that it is heat, and she does glow. It reinforces the idea of her as an energy.”
In post, they used an algorithm to upscale the footage from its native 1K resolution to 4K and to match it as closely as possible to the 6K footage from the VENICE.
While the visuals deliberately refrain from showing the inside of the extermination camp, the audience is not spared the harrowing sounds emanating from behind the wall. Neither are the Hösses, even though for them life seems to continue as normal.
Sound designer Johnnie Burn recalls to Aframe the original directive from Glazer. “He said, ‘It is going to be mandatory that we don’t go in the camp, and we don’t see the atrocities. We’re going to just hear them. It will all be sound.’ I panicked. I started reeling, because I realized that would be a leap of faith. And also, where are we going to get the sounds from? Jon said, ‘Well, that’s what you’re going to figure out.’”
Burn compiled a 600-page research document and spent the year before filming began and throughout the shoot and into post-production building the sound library with his team. According to Aframe, he recorded the industrial rumble of textile workshops and incinerators, boots marching on gravel, period-accurate gunfire, and death itself.
“It’s the sound of murder, and it has to be credible but we didn’t want to be sensational,” Burn said. “Anything sensationalized in the sound wouldn’t work, so understanding the difference between someone acting pain and actually being in pain at point of death, that’s to do with literally the cadence of the way people scream.”
The idea was to create an immersive experience where performances could dissolve into people simply going about their daily routines, and the cast was free not only to explore the environments but to lean into boring, mundane everyday life — a contrast to the horror happening literally on the other side of their backyard wall.
“Some takes were up to 45 minutes,” Sandra Hüller told Rolling Stone. “You didn’t know what was being filmed from what angle, or from where. The crew and monitors were in a separate building, so if they didn’t tell us to cut, we’d just restart a scene and it would end up being completely different.”
It was a concept that Glazer hoped to make explicit with the film’s ending, in which viewers are momentarily dropped into Auschwitz in the 21st century — a flash-forward that the director says came from his experience wandering around the grounds one morning and noticing the cleaning crew picking up litter and vacuuming in front of the exhibits.
“It was like they were tending graves,” Glazer says. “You know, Höss is long gone. He is ash. But the museum, and the importance of such museums, they are still there.”
Jim Louderback: Want To Grow Your Creator Business? Get a COO
TL;DR
Creator COO will be the new big job category for the creator economy over the next decade.
Dedicating a part- or full-time team member to operations not only improves a creator’s short-term outcomes, but will be crucial in building a business that lasts.
Business mentor Jim Louderback shares how to determine whether to hire an operations expert and, if so, which tasks to offload.
He recommends considering what a creator could do with additional capacity, what the creator’s long-term vision is, and what structure needs to be in place for growth.
Want to know more? See Jim Louderback in person at NAB Show in the Creator Lab, a new show floor experience centered on the creator economy. Register now!
As the creator economy matures, the business structures of individual content creators are starting to look a lot like those of conventional startups, with formalized roles for positions like CEO, CFO and even HR.
The starting point is that every creator is building a media business and that by delegating responsibility to employees the creator can capitalize on the chance to generate more revenue than they could manage alone.
“I think it fundamentally comes down to how do you scale your business as a creator, right? You can take it only so far as an individual with maybe a couple of part-time people working with you,” Louderback says.
At the same time, he said, “The revenue options have multiplied. There are a lot of opportunities to do things outside of the platforms to create revenue, whether it’s books, or merchandise, or courses. And the number of people taking advantage of it has multiplied.”
Because more and more creators and influencers are sitting at the center of the marketing funnel, the question is how do you take advantage of that and build a bigger business?
Louderback also points to another dynamic: the lifecycle of a creator. Like elite athletes, there may be only a sweet spot of a few years in which a creator can maximize their potential with an audience.
“Five to seven years, maybe, for a YouTuber. For TikTok it might be more like one to three years. But whatever it is, as a creator with some success, you’re thinking, how long can I last? And what do I do afterwards? Can I build something that exists without me having to be at the center of it all the time?”
He argues that, if creators are not already doing so, they should be casting around for a COO to help structure their business and move it forward.
“When you realize that you need to scale and you can’t do it all yourself, you have to ask yourself the hard questions. What are the things that you like doing? And what are the things you don’t like doing? What are the things that you’re comfortable delegating and what are the things you’re not comfortable delegating? Can you find people that fit those roles that you trust?”
Louderback acknowledges that most creators are “lone wolves, outdoor lions in the middle of the savanna” who find it “really hard to open up and bring people in that they trust.”
Nonetheless, Louderback urges creators to think of themselves as CEOs of their own startup.
“As a CEO your job is to hire the right people to make sure you have enough funding, to keep the company going and to set the direction of the business. That’s the job as a CEO whether of a traditional tech startup, or as CEO of a creator organization.”
He advises CEOs to look at all the aspects of a classic business and how each applies to them. That means thinking about marketing, sales, finance, production, HR, workspace infrastructure and technology infrastructure.
“Put them all on a board and think about how much time and percent of your work week [this takes] and think to yourself, if I had somebody else doing this, what more could I do? And how much more revenue could I bring in? And then think about your vision for where you want to go as a creator business? Where you want to be in three years and five years and 10 years? And think, do you think you’re gonna get there with what you have? And if the answers to all of those points to ‘I need help,’ then that’s when you probably need to bring in somebody in an operational role to manage those things for you.”
The operational leader of the company essentially needs to make sure that they are expanding the CEO’s ability to get the job done. The COO role can evolve with the business, growing from a fractional role to a full-time position or even multi-person team.
Of course, the best operators won’t come cheap, and it will probably mean a creator giving away equity in their business. Louderback maintains this will be worth it when creators realize the increased revenue and value that a COO brings, which would otherwise be left on the table.
“It’s a bridge to get over because, in the end, it’s not equity in you, it’s equity in the business,” he said. “Having a strong operational partner for a creator is going to give you a much better likelihood of success [to millions of dollars in annual revenue]. It’s a very untapped opportunity and there is a lot of wealth being created.”
He predicts billion-dollar companies emerging in the creator economy, with creators as the leads or as the focus of those companies.
“I think we will see financial structures and capital put to work to help build these billion dollar companies.”
But the future of entertainment extends well beyond Hollywood. Social media creators — otherwise known as influencers, YouTubers, TikTokers, vloggers and live streamers — entertain and inform a vast portion of the planet.
For the past decade, we’ve mapped the contours and dimensions of the global social media entertainment industry. Unlike their Hollywood counterparts, these creators struggle to be seen as entertainers worthy of basic labor protections.
Platform policies and government regulations have proved capricious or neglectful. Meanwhile, creators’ bottom-up initiatives to collectively organize have sputtered.
Living on the Edge
Industry estimates regarding the size and scale of the creator economy vary. But Citibank estimates there are over 120 million creators, and an April 2023 Goldman Sachs report predicted that the creator economy would double in size, from US$250 billion to $500 billion, by 2027.
According to Forbes, the “Top 50 Creators” altogether have 2.6 billion followers and have hauled in an estimated $700 million in earnings. The list includes MrBeast, who performs stunts and records giveaways, and makeup artist-cum-true crime podcaster Bailey Sarian.
The windfalls earned by these social media stars are the exception, not the norm.
The venture capitalist firm SignalFire estimates that less than 4% of creators make over $100,000 a year, although YouTube-funded research points to a rising middle class of creators who are able to sustain careers with relatively modest followings.
These are the users who find themselves most vulnerable to opaque changes to platform policies and algorithms.
Platforms like to “move fast and break things,” to use Meta CEO Mark Zuckerberg’s infamous expression. And since the creator economy relies on social media platforms to reach audiences, creators’ livelihoods are subject to rapid, iterative changes in platforms’ features, services and agreements.
Yes, various platforms have introduced business opportunities for creators, such as YouTube’s advertising partnership feature or Twitch’s virtual goods store. However, the platforms’ terms of use can change at the flip of a switch. For example, in September 2022, Twitch changed its fee structure. Some streamers who had been retaining 70% of all subscription revenue generated from their accounts saw this proportion drop to 50%.
In 2020, TikTok, facing rising competition from YouTube Shorts and Instagram Reels, launched its billion-dollar Creator Fund. The fund was supposed to allow creators to get paid directly for their content. Instead, creators complained that every 1,000 views translated to only a few cents. TikTok suspended the fund in November 2023.
Bias as a Feature, Not a Bug
The livelihoods of many fashion, beauty, fitness and food creators depend on deals brokered with brands that want these influencers to promote goods or services to their followers.
Yet throughout the creator economy, people of color and those identifying as LGBTQ+ have encountered bias. Unequal and unfair compensation from brands is a recurring issue, with one 2021 report revealing a pay gap of roughly 30% between white creators and creators of color.
Along with brand biases, platforms can exacerbate systemic bias. Creator scholar Sophie Bishop has demonstrated how nontransparent algorithms can categorize “desirability” among influencers along lines of race, gender, class and sexual orientation.
Then there’s what creator scholar Zoë Glatt calls the “intimacy triple bind”: Marginalized creators are at higher risk of trolling and harassment, they secure lower fees for advertising, and they are expected to divulge more personal details to generate more engagement and revenue.
Couple these precarious conditions with the whims and caprices of volatile online communities that can turn beloved creators into villains in the blink of a text or post, and even the world’s most successful creators live on a precipice of losing their livelihoods.
Rumblings of Solidarity
Unlike their counterparts in the legacy media industries, creators have taken neither easily nor well to collective action, as they operate from their bedrooms and fight for more eyeballs.
Yet some members of this creator class recognize that the bedroom-boardroom power imbalance is a bottom line matter that requires bottom-up initiative.
The Creators Guild of America, or CGA, which launched in August 2023, is but one of many successors to the original Internet Creators’ Guild, which folded in 2019. Paradoxically, CGA describes itself as a “professional service organization,” not a labor union, yet claims to offer benefits “similar to those offered by unions.”
There are other movements afoot: A group of TikTok creators formed a Discord group in September 2022 to discuss unionizing. There’s also the Twitch Unity Guild, a program launched in December 2022 for networking, development and celebration that includes a dedicated Discord space. In response to the rampant bias in influencer marketing, creator-led firms like “F–k You Pay Me” are demanding greater fairness, transparency and accountability from brands and advertisers.
Twitch streamers are already seeing some of their organizing efforts pay off. In June 2023, after a year of repeated changes in streamer fees and brand deals, the company capitulated in response to the backlash of their top streamers threatening to leave.
None of these initiatives has yet attained the legal status of unions such as the Writers Guild of America. Meanwhile, efforts by the Screen Actors Guild-American Federation of Television and Radio Artists to recruit creators have proved limited. Legal scholar Sara Shiffman has written about how SAG-AFTRA provides creators with health and retirement benefits, but offers no resources to ensure fair and equitable compensation from platforms or advertisers. Nonetheless, while on strike, SAG-AFTRA threatened creators that partnered with studios with a lifetime ban from joining the union.
And despite these bottom-up efforts, the tech behemoths refuse to recognize creators’ fledgling organizations. When a union for YouTubers formed in Germany in 2018, YouTube refused to negotiate with it. Nonetheless, you’ll see companies trot out their biggest stars when they find themselves under regulatory scrutiny. That’s what happened when TikTok sponsored creators to lobby politicians who were debating banning the platform.
An Invisible Class of Labor
Meanwhile, most governments have failed to provide support for — or even recognition of — creator rights.
Within the US, creators “barely exist” in official records, as technology reporters Drew Harwell and Taylor Lorenz recently pointed out in The Washington Post. The US Census Bureau makes no mention of social media as a profession; it is invisible as a distinctive class of labor.
To date, the Federal Trade Commission is the only US agency to introduce regulation tied to the work of creators, and it’s limited to disclosure guidelines for advertising and sponsored content.
Even as the European Union has operated at the forefront of tech and platform policy, creators rate scant mention in the body’s laws. Writing about the EU’s 2022 Digital Services Act, legal scholars Bram Duivenvoorde and Catalina Goanta criticize the EU for leaving “influencer marketing out of the material scope of its specific rules,” a blind spot that they describe as “one of its main pitfalls.”
The success of the 2023 Hollywood strikes could be just the beginning of a larger global movement for creator rights. But in order for this new class of creators to access the full breadth of their economic and human rights — to borrow from the movie Jaws — we’re gonna need a bigger boat.
M&E Technologists Look Towards 2024
TL;DR
NAB PILOT asked the NAB Technology staff and other broadcast industry luminaries for their predictions for 2024.
In 2024, broadcasters will continue to play a vital role in the local news ecosystem as data journalism becomes even more important. FM radio will see a digital power increase, with 2024 set to be a breakout year for terrestrial broadcast radio in connected cars.
After a year of observing developments in artificial intelligence, the broadcast industry will begin to leverage AI in new and innovative ways.
The adoption and development of new video compression technologies will enable eco-friendly and high-quality content delivery.
The ATSC 3.0 rollout will continue to ramp up, with broadcasters seeking to enter the wireless market.
The entire industry wants to know what broadcast technologists are most excited about and expect or hope to see in 2024. NAB PILOT, the National Association of Broadcasters’ innovation arm, asked the NAB Technology staff and a few other industry luminaries if they had anything to share. They absolutely delivered. As we look ahead into the new year, here’s a compilation of predictions for 2024.
Data Journalism Grows
Chris Jansen, Head of Broadcast News Partnerships, Google, and PILOT Member
In 2024, broadcasters will continue to play a vital role in the local news ecosystem. As compute power becomes more cost efficient, data journalism becomes even more important. From using tools like Trends to discover what people are curious about, to tools like Pinpoint to mine through mountains of public documents with a few clicks, the stories communities need to know are out there.
2024 – the Year for an FM Radio Digital Power Increase
David Layer, Vice President, Advanced Engineering, NAB
NAB and its broadcaster partners have been studying, testing and advocating for relaxed FM radio digital power rules for over a decade. 2023 saw the FCC release a Notice of Proposed Rulemaking that tentatively adopts the changes we have been advocating for, including a modified FM radio digital power formula that will allow more stations to increase their digital power to -10 dBc (10 decibels below the analog carrier), allowing for better signal penetration into buildings and better replication of a station’s analog coverage by the digital signal. All the pieces are now in place for the FCC to issue an Order adopting these changes in 2024.
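For readers unfamiliar with the unit, dBc expresses the digital signal’s power relative to the analog carrier on a logarithmic scale. The short sketch below is a back-of-envelope illustration (not part of the FCC proposal or NAB’s filings) converting commonly cited HD Radio injection levels into fractions of analog carrier power; at -10 dBc the digital sidebands run at roughly 10% of the analog carrier’s power.

```python
# Back-of-envelope: convert an FM digital "injection level" in dBc
# (decibels relative to the analog carrier) into a fraction of carrier power.
# The levels shown are commonly cited HD Radio figures; regulatory details simplified.

def dbc_to_power_fraction(dbc: float) -> float:
    """Return digital power as a fraction of the analog carrier power."""
    return 10 ** (dbc / 10)

for level in (-20, -14, -10):
    print(f"{level:>4} dBc -> {dbc_to_power_fraction(level):.1%} of analog carrier power")

# -20 dBc -> 1.0%, -14 dBc -> 4.0%, -10 dBc -> 10.0%
```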
AI Begins to Fuel Industry Growth
Jeremy Sinon, Vice President, Digital Strategy, Hubbard Radio, and Vice Chair, NAB Digital Officer Committee
In 2023, broadcasters and the world at large became broadly aware of generative AI for the first time. I believe much of the last 12 months has been spent “cautiously observing” and scrambling to play defense. Defense that is absolutely necessary for us to keep playing.
However, it’s time for us to also play offense.
In 2024 I think we’ll start to see some smart, impactful real world solutions from broadcasters and from broadcast partners. Solutions that will enhance our efficiency, knowledge and profitability. With machine learning and AI, our small teams can be empowered to do more. Adding AI to our mix will help us automate what is currently manual, which will feel like adding an extra copywriter, an extra producer, an extra developer, etc., to our teams. We can also use this technology to help us learn more about our audiences, build strategies to super-serve them and target like audiences in our marketing.
We are embarking on a whole new chapter of our industry, and in 2024 we graduate from observing in the stands to playing on the field.
Broadcast Radio Dominates the Connected Car Media Landscape
Joe D’Angelo, Senior Vice President, Broadcast Radio and Audio, Xperi, and PILOT Member
2024 is setting up to be a breakout year for terrestrial broadcast radio in connected cars. Thanks to DTS AutoStage, the support of tens of thousands of radio stations, and over 15 OEMs, hybrid radio is going mainstream. The industry will experience exponential growth in DTS AutoStage-enhanced radio listening as more OEMs launch with the platform around the world, and the impact will be measurable thanks to our Broadcaster Analytics platform. Broadcasters will be able to better understand their audience and the impact of their programming across a wide-ranging mix of cars, markets and formats. These insights and the enhanced user experience are sure to delight both advertisers and listeners in 2024 and for years to come.
“It was the best of times, it was the worst of times.”
John Clark, Senior Vice President, Emerging Technology, NAB
Everyone is talking about artificial intelligence, and everyone has an opinion. Usually, those opinions fall into one of two buckets: good and bad. We’ll continue to see examples that easily fit into those extremes. However, we’ll see more use cases emerge in the messy middle. The more we understand how the new technology can be used, the more we’ll wrestle with how it should be used and how to combat the ways it’s used inappropriately. This will force us to think about AI not as a binary choice of good or bad, and it will force us to put the technology to use in service to our audiences. Doing nothing is not an option.
Sustainable Media: Adoption and Development in Video Compression Technologies for Eco-Friendly and High-quality Content Delivery
Ling Ling Sun, Chief Technology Officer, Nebraska Public Media, and Chair, NAB Broadcast Engineering and IT Conference Committee
In my 2024 prediction, a notable increase in the adoption and development of more efficient video compression technologies is anticipated, aligning with the media industry’s sustainability goals. As videos currently constitute more than half of network traffic, the potential to halve this load through enhanced compression signifies a substantial stride toward eco-friendly practices. Beyond network traffic, compression technologies also play a crucial role in conserving essential data storage.
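To make the arithmetic behind that claim concrete, here is a minimal sketch (my illustration, with an assumed 60% video share of traffic, not a figure from Sun): if a more efficient codec halves video bitrates at constant quality, total network traffic falls by roughly 30%.

```python
# Hypothetical illustration: how much total network traffic falls if video
# bitrates are halved. The 60% video share is an assumption for this example.
video_share = 0.60          # fraction of total traffic that is video
compression_gain = 0.50     # new bitrate as a fraction of the old (i.e., halved)

new_total = (1 - video_share) + video_share * compression_gain
print(f"Total traffic falls to {new_total:.0%} of today's level "
      f"(a {1 - new_total:.0%} reduction).")
# -> Total traffic falls to 70% of today's level (a 30% reduction).
```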
However, with the increasing efficiency of compression comes complexity. There is a clear demand for new chips capable of substantially reducing computational power requirements. If these chips effectively cut power consumption in half, we could witness more energy-efficient compression processes. Moreover, artificial intelligence is positioned to play a pivotal role in adaptive compression and beyond, promising more efficient semantic communication.
This ecological approach to developing a sustainable media industry is indispensable for enhancing user experiences with technologies such as Ultra High Definition (UHD) and immersive videos. Minimizing resource usage and delivering high-quality content are pivotal for both environmental consciousness and heightened user satisfaction in the dynamic media landscape. As bandwidth demands for high-quality content increase, the contribution of efficient compression becomes even more significant.
Radical changes are happening in numerous areas of the broadcast TV ecosystem, but perhaps most pointedly with the ATSC 1.0 platform transitioning to ATSC 3.0. Since its regulatory green light in 2017, ATSC 3.0 has progressed steadily in the marketplace, albeit at times a little unpredictably and at a varying rate of change. A quick Google search and a query to ChatGPT led me to understand that acceptance of major business changes typically follows an adapted version of the familiar Kübler-Ross stages of grief. For business changes, industries typically go through four stages of acceptance: denial, resistance, exploration and commitment.
With denial and resistance largely in the rearview mirror, over the coming year I think we are likely to see a mass movement of stakeholders migrate across the exploration–commitment boundary of the transition — and not look back. In practice, commitment to ATSC 3.0 means more than just infrastructure changes and ATSC 3.0 lighthouses going on the air. It means significant investment in enhancing programming and services that can benefit most from the new infrastructure and will most impress and attract consumers. Look for rollouts of High Dynamic Range (HDR) and native 1080p production programming, more audio options, compelling broadcast app implementations, enhanced emergency information services and new non-television use cases to emerge. Philosopher and author Jean-Paul Sartre once said in an interview that commitment is an act, not a word. 2024 should be the year of action on ATSC 3.0 commitment.
Broadcasters Enter the Wireless Market
Mike Kelley, Vice President and Chief Information Security Officer, The E.W. Scripps Company, and Chair, NAB Cybersecurity Advisory Group
2024 will be the year when broadcasters redefine their market position, delving into the realm of wireless data transmission previously dominated by telecoms and internet service providers. We will witness experimentation with ATSC 3.0, exploring use cases that capitalize on the benefits of a one-to-many architecture and identifying areas where hyper-localized precision is necessary. This one-two punch marries our industry expertise with a modern technology standard that offers a lifeboat for broadcasters navigating treacherous economic headwinds.
ATSC 3.0 will establish the broadcast industry as a new entrant to the wireless data market, offering a wide array of novel services and engaging audiences in unprecedented ways. The new standard is not just an upgrade; it’s a strategic pivot that could redefine the broadcast industry’s role in a data-driven future.
Best of the Old and Best of the New
Roswell Clark, Executive Director, Radio Engineering, Cox Media Group, and Vice Chair, NAB Radio Technology Committee
2024 promises to continue the evolution and growing importance of the public-facing benefits of radio, ranging from AM in the dashboard to the advancement of features in hybrid radios. How broadcasters support the technologies to maximize these areas for consumers and public safety will be interesting and exciting. 2024 also looks to be the year that advancements in the next generation of radio architecture will potentially close the few remaining gaps in technology solutions to provide greater reliability, management and scalability.
2024 Will Be Highly Dynamic
Sam Matheny, Chief Technology Officer, NAB
High Dynamic Range (HDR) will begin to be deployed at numerous television stations across the country. This technology brings a noticeable improvement to the picture quality in video and is enabled within the ATSC 3.0 standards. In layman’s terms, it provides brighter whites and deeper, darker blacks. And it does so without costing a fortune in bandwidth the way sending more pixels (think 4K) does. The Ultra HD Forum has demonstrated HDR/SDR single master workflows, and I think some of those efforts will translate into real-world deployments. I believe we’ll see many stations and perhaps multiple networks adopt some form of HDR technology to improve their NEXTGEN TV services.
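Matheny’s bandwidth point can be sized up with a rough, pre-compression comparison. The sketch below assumes 8-bit SDR versus 10-bit HDR, 4:2:0 chroma subsampling and 60 fps; these are illustrative assumptions, and delivered bitrates depend heavily on the codec, but the ratio makes the point: adding HDR bit depth costs roughly 25% more raw data, while quadrupling pixel count for 4K costs 300% more.

```python
# Simplified, pre-compression comparison: extra data from HDR bit depth vs. 4K pixels.
# Assumes 4:2:0 chroma subsampling at 60 fps; real delivered bitrates depend on the codec.

def raw_mbps(width, height, bits_per_sample, fps=60, chroma_samples_per_pixel=1.5):
    bits = width * height * chroma_samples_per_pixel * bits_per_sample * fps
    return bits / 1e6

sdr_1080p = raw_mbps(1920, 1080, 8)    # 8-bit SDR baseline
hdr_1080p = raw_mbps(1920, 1080, 10)   # 10-bit HDR at the same resolution
sdr_2160p = raw_mbps(3840, 2160, 8)    # 8-bit at 4K resolution

print(f"HDR at 1080p: {hdr_1080p / sdr_1080p:.2f}x the raw data of SDR 1080p")
print(f"4K at 8-bit:  {sdr_2160p / sdr_1080p:.2f}x the raw data of SDR 1080p")
# -> roughly 1.25x for HDR vs. 4.00x for 4K
```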
Broadcast Positioning System (BPS)
Tariq I. Mondal, Vice President, Advanced Technology, NAB
Broadcast Positioning System (BPS), an ATSC 3.0 application, will get serious attention from the government agencies who are trying to solve the GPS vulnerability and dependency problems for civil GPS use. I am hopeful that the U.S. government will fund a BPS market trial and evaluate its performance in 2024. A few broadcasters will deploy the system in test markets, and the ATSC 3.0 equipment manufacturers will take notice of this potential opportunity. BPS will also get traction in South Korea in 2024 in terms of research and development. U.S. TV broadcasters will consider BPS to be a strategically important application to advance ATSC 3.0 deployment.
Evan Shapiro: What New Gaming Research Reveals (You Actually Would Be Surprised)
TL;DR
Gaming is a $200+ billion annual business. Yet, despite the pervasive presence of digital gaming in our lives, there are many misperceptions and misunderstandings about the global gaming market, especially in the media.
A new report analyzed by Evan Shapiro highlights some nuances that the rest of M&E should note: Older demos are playing a lot more than you might think, and females make up a sizeable target audience.
While free mobile games are the most popular and make for an attractive advertising target, in-game purchases are huge revenue generators and the popularity of subscription streaming is on the rise among younger players.
In media and entertainment, gaming usurped film as king a number of years ago — and its growth continues to astound. Yet even as Hollywood acknowledges the deep-seated popularity of titles like “The Last of Us” and “Super Mario Bros.” by increasing the number of game-to-screen conversions, knowledge of gamer demographics and habits remains patchy, and stereotyped attitudes persist.
“Notably, the gaming community may look different than how you perceive it — older, more female, and far less console-based than how conventional wisdom usually paints modern gamers,” says media analyst Evan Shapiro, who wrote up a survey into the U.S. industry compiled by Publishers Clearing House.
“Despite the pervasive presence of digital gaming in our lives, there are many misperceptions and misunderstandings about the global gaming market — even and perhaps especially in the media,” Shapiro judges.
Gaming by the Numbers
Let’s start with some broad figures.
PCH values the global market at $200 billion, which is a pretty accurate figure if we compare it to a couple of others. Analysts at gaming research specialist Newzoo have the global games market generating revenues of $187.7 billion in 2023, rising to $212.4 billion in 2026. Another report, published mid-2023 by PwC, expects total gaming revenue to rise from US$227 billion in 2023 to north of US$312 billion in 2027, representing a 7.9% CAGR.
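As a rough back-of-the-envelope check (our arithmetic, not PwC’s, and assuming the 7.9% rate compounds annually from the 2023 base), the standard CAGR relationship runs:

$227 billion × (1 + 0.079)^4 ≈ $308 billion by 2027

which lands in the same ballpark as the $312 billion the report cites; the small gap likely reflects rounding or a slightly different base year in PwC’s model.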
By contrast, worldwide movie receipts totaled less than $100 billion in 2022 and are estimated to grow to $169.62 billion by 2030, according to Zion Market Research. Very approximately, then, the gaming industry is twice the size of the movie business by revenue.
There are 3.6 billion gamers in the world according to PCH and 3.4 billion according to Newzoo. Both analysts concur that the majority of gaming takes place on the world’s most ubiquitous screen, the mobile phone.
While games like Madden, FIFA, Call of Duty and Fortnite get a lot of buzz, puzzle games such as Wordle, Candy Crush and Fishdom dominate game-play for U.S. adults, notes Shapiro. In July 2023, mobile puzzle game Royal Match had 16 million downloads.
“Mobile’s dominance with adult gamers explains why most gaming revenue comes from mobile,” he says.
Mobile gaming will continue to dominate the gaming landscape in both player numbers and spending. Per Newzoo, mobile accounts for nearly half of the global market.
Yet the analyst points out that growth was hampered in 2023 as developers and marketers had to revise their strategies amid privacy-related challenges. Nevertheless, Newzoo believes that mobile developers will adapt to the new regulatory landscape in the coming years and that mobile will enjoy its first hundred-billion-dollar year in 2026.
The PCH survey polled 68,000 Americans over 25 about their gaming habits, devices, formats, genres, and spending. Where it scores is in breaking down the age and demos of those playing.
It found that, contrary to popular perception, digital games are played regularly across adult age groups, not just by the young. Three-quarters of the 25-34 demo are gamers, but so are nearly two-thirds of adults aged 35-44, more than half of 45-54 year-olds, and 40% of adults over 55.
The days of gaming as a male-dominated bedroom recreation are also long gone. PCH is just the latest research to find that women are a hugely significant target audience. Per the report, younger men game at a slightly higher rate than younger women, but that flips in the 55-and-up demographic, where a materially larger share of women say they regularly game.
Perhaps unsurprisingly, more than three times as many regular gamers play free games as play paid titles or a mix of both. Two-thirds of American women prefer free games over paid, and twice as many men prefer free games as paid.
“Paid game use is primarily driven by younger players,” Shapiro writes. “This reinforces findings from a number of our surveys that younger consumers are more comfortable paying for content than their elder peers.”
The size and intergenerational make-up of the free gamer community makes it an attractive audience for advertisers, which Shapiro suggests is why in-game marketing is one of the fastest growing sectors in advertising.
Free gaming may dominate adult game-play, but that doesn’t mean playing is free. In-game purchases are a sizable driver of gaming economics. These can come in the form of virtual objects, extra lives, or loot boxes.
Per PCH, 41% of 45-54 year olds say they make in-game purchases, and one-fifth of gamers over 55 buy lives, objects, or other stuff in the games they play.
“No game is truly free,” notes Shapiro.
When it does come to paid games, it is subscriptions that are growing fast. PlayStation Plus, for instance, has around 50 million paying subscribers worldwide. Gamers stream billions of hours of gaming-related content each year, but the market is dominated by younger players 25 and under.
“The battle royale for gaming subs is amongst the biggest gaming companies in the business, and some of the largest most powerful companies on the planet,” says Shapiro. “This makes for a hard-fought, fragmented marketplace.”
Considering more than 50 billion hours of gaming content were live-streamed in 2023, it’s safe to say that gamers 25 and under stream at far higher rates than the older demos.
“Something I found quite surprising is that in gaming livestream, YouTube Live is bigger than Facebook Gaming and Twitch, combined.”
Those in more traditional film and TV should take note, Shapiro says. “The majority of adults with growing expendable income are spending more and more of their free time and money in games, and advertisers are spending more and more of their money on gaming platforms.”
Most adults in the US are gamers. And at least half of those gamers are women. As the PCH data shows, much about the gaming market is unexpected or even counter to conventional thinking.
Hollywood is commissioning more video game adaptations than ever, but the video game industry isn’t necessarily flocking to Tinsel Town.
January 4, 2024
Posted
January 1, 2024
AI’s Authenticity Problem
BY ROGER J. KREUZ, Associate Dean and Professor of Psychology, University of Memphis
When Merriam-Webster announced that its word of the year for 2023 was “authentic,” it did so with over a month to go in the calendar year.
Even then, the dictionary publisher was late to the game.
In a lexicographic form of Christmas creep, Collins English Dictionary announced its 2023 word of the year, “AI,” on Oct. 31. Cambridge University Press followed suit on Nov. 15 with “hallucinate,” a word used to refer to incorrect or misleading information provided by generative AI programs.
At any rate, terms related to artificial intelligence appear to rule the roost, with “authentic” also falling under that umbrella.
AI and the Authenticity Crisis
For the past 20 years, Merriam-Webster, the oldest dictionary publisher in the U.S., has chosen a word of the year — a term that encapsulates, in one form or another, the zeitgeist of that past year. In 2020, the word was “pandemic.” The next year’s winner? “Vaccine.”
“Authentic” is, at first glance, a little less obvious.
According to the publisher’s editor-at-large, Peter Sokolowski, 2023 represented “a kind of crisis of authenticity.” He added that the choice was also informed by the number of online users who looked up the word’s meaning throughout the year.
The word “authentic,” in the sense of something that is accurate or authoritative, has its roots in French and Latin. The Oxford English Dictionary has identified its usage in English as early as the late 14th century.
And yet the concept — particularly as it applies to human creations and human behavior — is slippery.
Is a photograph made from film more authentic than one made from a digital camera? Does an authentic scotch have to be made at a small-batch distillery in Scotland? When socializing, are you being authentic — or just plain rude — when you skirt niceties and small talk? Does being your authentic self mean pursuing something that feels natural, even at the expense of cultural or legal constraints?
The more you think about it, the more it seems like an ever-elusive ideal – one further complicated by advances in artificial intelligence.
How Much Human Touch?
Intelligence of the artificial variety — as in nonhuman, inauthentic, computer-generated intelligence — was the technology story of the past year.
At the end of 2022, OpenAI publicly released ChatGPT, a chatbot built on its GPT-3.5 large language model. It was widely seen as a breakthrough in artificial intelligence, but its rapid adoption led to questions about the accuracy of its answers.
The chatbot also became popular among students, which compelled teachers to grapple with how to ensure their assignments weren’t being completed by ChatGPT.
Issues of authenticity have arisen in other areas as well. In November 2023, a track described as the “last Beatles song” was released. “Now and Then” is a compilation of music originally written and performed by John Lennon in the 1970s, with additional music recorded by the other band members in the 1990s. A machine learning algorithm was recently employed to separate Lennon’s vocals from his piano accompaniment, and this allowed a final version to be released.
Advances in technology have also allowed the manipulation of audio and video recordings. Referred to as “deepfakes,” such transformations can make it appear that a celebrity or a politician said something that they did not — a troubling prospect as the U.S. heads into what is sure to be a contentious 2024 election season.
Our judgments of authenticity are knee-jerk, he explained, honed over years of experience. Sure, occasionally we’re fooled, but our antennae are generally reliable. Generative AI short-circuits this cognitive framework.
“That’s because back when it took a lot of time to produce original new content, there was a general assumption … that it only could have been made by skilled individuals putting in a lot of effort and acting with the best of intentions,” he wrote.
“These are not safe assumptions anymore,” he added. “If it looks like a duck, walks like a duck and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.”
Though there seems to be a general understanding that human minds and human hands must play some role in creating something authentic or being authentic, authenticity has always been a difficult concept to define.
So it’s somewhat fitting that as our collective handle on reality has become ever more tenuous, an elusive word for an abstract ideal is Merriam-Webster’s word of the year.
An increase in creativity will be AI’s biggest impact on the creator economy, say 30% of content creators polled by video platform Artlist.
December 26, 2023
The Polarized Perceptions of Our AI Future
TL;DR
The philosophical debate about the future of AI is between those “doomers” who only see dystopia and death of the human race and “accelerationists” who preach the gospel of AI utopianism.
Even if this was not the crux of the corporate fallout at OpenAI, the discussion is worth exploring because those positions are hardening on either side.
Perhaps there is hope for a middle way where the majority of those in between the extremes can come to collective agreement about the way forward. Or is that too liberal a view?
Does the recent chaos at OpenAI shine a spotlight on the only debate about AI that matters: Its existential threat to humanity?
Thom Waite explores this: Depending on who you ask, the future of humanity — in a world populated by extremely intelligent machines — looks very different.
On one hand, you have techno-utopias, where AI has solved all of humanity’s most difficult problems, from the climate crisis to disease to interstellar travel. On the other, you have scenes from a Terminator-style timeline where Skynet has been built and we are all slaves to the machine.
OpenAI’s Theoretical, Existential Crisis
Could this raging philosophical debate be at the heart of the schism that ripped apart OpenAI a few weeks ago and saw CEO Sam Altman dramatically ousted — and then welcomed back?
Even if the disagreement (albeit a seemingly fundamental one) was about matters more prosaic (stocks and shares, maybe) it is worth pursuing where this leads.
One reading of the summary dismissal was that the OpenAI board has a commitment that states its “primary fiduciary duty is to humanity.” In other words, if it sees something that might harm humanity, it’s allowed to make any necessary leadership changes to keep the threat contained.
Superficially, that’s a bit like Google’s (now Alphabet) ex-corporate motto “Don’t Be Evil,” which has always had more marketing spin about it than any corporate social policy (such as syphoning user data from its platform and Android OS to sell ads).
Perhaps Altman and his researchers had made some sort of breakthrough, which made artificial general intelligence likely sooner rather than later. AGI is the type of super-AI that matches human thinking so exactly that there is in fact no discernible difference (ergo, what is the point of humans). There were rumors of this, but these have died on the vine since Altman went back to the lab.
Nevertheless, as Waite puts it, it is easy to see how the developments have reignited the debate about the future of AI, with “doomers” at one end of the spectrum and believers in “effective accelerationism” at the other, preaching a version of AI utopianism.
Waite assays what each party believes.
Doomers vs. Accelerationists
Unsurprisingly, “doomer” is a label for people who believe there is a high probability that AGI will be a bad thing: a high p(doom), in other words. As a result, they often advocate slowing down AI development, or even putting it on pause until guardrails can be put in place.
Emmett Shear, the former chief executive of livestreaming site Twitch and the short-lived interim CEO of OpenAI when Altman was fired, dubbed himself a “doomer” earlier this year.
According to Waite, members of the board who were instrumental in the recent shake-up have also expressed deep concerns about the future of the technology, which sets the battleground for the supposed conflict.
As is obvious by their moniker, accelerationists want AI development to ‘Go Go Go’. Influential entrepreneurs like Marc Andreessen are in this camp along with other tech evangelists who last year were motoring on about the greater good of Web3, NFTs and crypto. Or those with a quest for bodily immortality (like Peter Thiel) or those actually serious about fusing their consciousness into the network in a singularity (which didn’t end well for Johnny Depp’s scientist in Transcendence).
The movement isn’t monolithic, of course. Some believe that it’s important to achieve AGI as soon as possible because it will usher in a post-scarcity society, radically improving people’s living conditions across the globe and, at its core, reducing humanity’s net suffering.
Others argue that it’s not about reducing human suffering at all. Says Waite, “They say that society’s only responsibility is to build superior beings that can take our place and spread their superintelligence throughout the universe. In this scenario, the survival of the human species is irrelevant.”
This type of thinking borders on the cultish and, as Waite points out, doesn’t have many adherents.
The debate may be flippantly presented here, but it matters because the dividing lines are hardening.
“To some extent, this makes sense,” Waite muses. “If you truly believe that AI can right all of humanity’s wrongs, find cures for diseases, save us from climate catastrophe, and bring about an era of abundance — as the most ardent accelerationists do — then it’s basically a moral imperative to make sure it happens as soon as possible.”
Like fundamentalist anti-abortionists who believe that every sperm is sacred, anyone standing in the way would, hypothetically, have millions of deaths on their hands.
On the other extreme are those who have perhaps watched or read too many dystopian sci-fi stories and believe that machines will not only gain intelligence but that doing so will inevitably mean the end of us and our outmoded muscle-and-bone technology.
“If you believe this, then the critical mission is to stop development, or at least slow it down until we can work out how to do it safely,” Waite notes. “OpenAI reinstating Sam Altman is considered by many to be a failure of this mission, since it appears to override the original aims of the company’s board – to protect humanity from the worst consequences of a rushed AI system.”
Naturally, there’s a middle ground, so there’s hope that the twain can meet after all. This belief stems from a broader reading of human history, which understands the survival of our species to be inextricably linked to new technology.
(The jump cut from an ape smashing bones in a tantrum to a spaceship orbiting the moon can serve as a useful metaphor. We use tech to advance.)
… And Somewhere in the Middle
It’s also the case that in the middle of these two polarized camps lie the vast majority of AI researchers and billionaire funders.
“Hopefully, they can take some of the arguments from both sides and work out how to get the best out of the technology while limiting the damage it might cause, through measures like industry regulation and international safety deals,” says Waite.
This, to me, strays a little too far toward the side of the argument that says AI is just a tool, that how we use it is what counts, for good or for evil, and that good will prevail.
Marc Andreessen’s “Techno-Optimist Manifesto” advocates that we should all view technology as a force for pure good. People have concerns.
March 21, 2024
Posted
December 26, 2023
Is It Technology Dread or Imminent Apocalypse? (Both?) Asking Sam Esmail
TL;DR
Writer-director Sam Esmail discusses his new film, “Leave the World Behind,” which sees Julia Roberts, Ethan Hawke and Mahershala Ali facing the end of the world.
The film makes a strong commentary on society’s dependence on technology, which is only going to grow as we continue to incorporate AI into our lives.
Esmail includes cheeky digs at Tesla and at Netflix, the studio that funded the film.
From the dystopian science fiction thriller Mr. Robot through Amazon’s military mystery Homecoming to his new film, Leave the World Behind, writer-director Sam Esmail’s thematic obsession is the impact of technology on society.
The film examines the reliance we have on technology as an apocalyptic series of events cuts off all communication.
“I’m not a technophobe,” Esmail insists in a Google Talk moderated by Josh Lanzet. “I think technology is agnostic, it has no morality to it. It’s the human side that I’m more fascinated with. I really do think that it’s our sort of complicity, or how we use tech that will, in the case of the film, kind of offer a cautionary tale of what could happen to our world if we go one way or the other with it.”
Can we still have sort of a functioning community without technology? he asks.
“Ultimately, technology is a double-edged sword,” he said during an interview on the ReelBlend podcast produced by CinemaBlend. “When I think about… the positives, it gives us access to information, to people, to media, to content that we want to explore. I think it’s a tool like anything else [and] it’s what we do with it.”
Based on Rumaan Alam’s 2020 novel, Leave the World Behind is set mainly in a country house outside of New York City, where a couple played by Julia Roberts and Ethan Hawke travel with their children for a weekend getaway. On their first night there, two strangers, played by Mahershala Ali and Myha’la, arrive at the door, declare they are the owners of the house and ask to be let in, citing a blackout in the city. Distrust and paranoia ensue as Esmail uses the tropes of the disaster movie to explore relationships of race and class.
The Towering Inferno, Earthquake and The Day After Tomorrow were among the influences, but the idea that touched a nerve was how people can lose sight of their common humanity in the face of a crisis.
“It’s pretty relevant today given what’s going on in the world,” Esmail told Matt Zoller Seitz at Vulture. “The other thing that interested me is that this book does the inverse of what a typical disaster film does. The disaster elements tend to be the center of the story in disaster films. The characters tend to be secondary. Here, I could invert that process and be with the characters and have the disaster element exist more in the distance. That instantly felt more authentic to how humans would experience a crisis like that.”
Esmail read the book during lockdown when the idea that people can easily lose sight of their common humanity in the face of their own danger was all too real.
“But prior to reading the book I had this idea percolating in the back of my head about trying to construct a sort of disaster thriller centered around a cyberattack,” he told Brenna Ehrlich at Rolling Stone. “Because I think cyberattacks — even though they’re out in the public consciousness — there’s something ominous but equally mystifying about them.”
Classic paranoia thrillers like The Parallax View and North by Northwest were other touchstones, the latter providing inspiration for a scene in which Mahershala Ali’s character runs from a crashing plane.
“It’s not very subtle,” Esmail admits to Rolling Stone. “In all honesty, I don’t think there’s a movie made in contemporary times that doesn’t show some influence by Hitchcock. I think he’s essentially invented modern-day film grammar, but clearly, his work was looming large over the film.”
We also learn from Vulture that Esmail cast Ali in part because he thinks of the actor as a modern day Hitchcockian leading man. “The prototype was Cary Grant or Jimmy Stewart in Hitchcock’s films. They are an Everyman. They’re not five steps ahead, like a superhero, but they’re half a step ahead. They’re savvy enough to size up any situation. Mahershala has that.”
The director also talks about the cinematography of Leave the World Behind, in particular camera moves that seem to travel through the architecture much as in The Shining, as well as in an iconic Hitchcock film.
“That was a huge influence,” he admits, talking about Kubrick’s psychological horror film on the ReelBlend podcast. “I love big camera moves, especially when it’s relaying something the audience doesn’t know. It’s like what you’re saying: It’s almost as if the movie’s a little possessed, and you’re the demon looking down at those people.
“It’s that great shot in Rear Window: Jimmy Stewart’s asleep and the camera’s moving, and then you’re looking across the street seeing the thing he’s not seeing, and then you realize, ‘Wait a minute — who am I? What’s happening? Who’s seeing it?’ It’s very unsettling. Ever since I saw that film as a kid, I’ve always loved the idea of a camera being its own sort of person.”
Esmail’s script exhibits an eerie synchronicity with current events. For instance, he made the movie when conflict had not yet escalated in the Middle East. Yet there’s a startling scene where Ethan Hawke’s character is pursued by a drone that drops leaflets written in Arabic that say “Death to America” — and later, another character hears about similar messages, this time in Korean.
“Honestly, I tried to follow the guidelines out of the playbook of how coup d’etats actually work, especially when it’s a foreign actor,” he told Rolling Stone. “Propaganda misinformation is an old tactic. I just took that and magnified it and heightened it to this situation. It plays on your own biases and your own beliefs about who our enemies are, and I always love it when you can remove the barrier between the audience and your protagonist.”
Turning on Tech
Another scene features a number of Teslas that turn on their self-driving functions to block the roadways. Esmail says he didn’t seek permission from Elon Musk for that.
“Look, I wrote it in the script. I asked my amazing props guy, Bobby, to bring a bunch of Teslas out on the street. We shot the scene. I edited it in post, I showed it to Netflix, I crossed my fingers. And to this day, no one has said anything to me. So yeah, I’m hoping the movie comes out and no one will say anything.”
What doesn’t get lost in a digital attack are physical media like vinyl, DVDs and VHS (though you’d still need a source of electricity to play them). These become a source of comfort and nostalgia towards the end of the picture. But how did that sit when making the movie for a streaming service?
Esmail wasn’t afraid to poke the hand that feeds. On the one hand, he claims to be a “great proponent” of physical media, but he also explains that one of the advantages of streaming services like Netflix “is that you really have access to any movie from across history at your fingertips.”
“So there’s, there’s always a conflict because I’m a proponent of theatrical. I’m a proponent of DVDs and Blu-rays. But I’m also not mad at a streaming service that lets me see all the classics at a moment’s notice.”
Nonetheless he includes a cheeky shot that he doesn’t think “the Netflix folks” have noticed: “In the very end, you see Rose’s thumb hovering over the remote, and it goes past the Netflix button to hit ‘play’ on the DVD player.”
Notes From a Former President
The film’s exec producers are Netflix stablemates Barack and Michelle Obama, who were more involved in the production than merely lending the cachet of their name.
“He’s a huge movie lover and a huge fan of the book,” Esmail confides to ReelBlend. “He really was committed to making this into a great movie. And he was giving me notes at the script stage, multiple drafts, including post rough cuts. It’s kind of surreal because I do think he is one of the most brilliant minds on the planet, and to get his insight on the disaster element, characters, the theme. It was the highlight of my career.”
Turns out Barack Obama is a fan of Mr. Robot, to the extent that Esmail got a call from the White House when Obama was President.
“We were in the middle of second season, and it hadn’t aired yet. And we were cutting the episodes. And someone from the White House, contacted our office and said, he’d love to get rough cuts of the episodes. Imagine that.”
“Oppenheimer” director Christopher Nolan says his new film offers lessons on the “unintended consequences” of technology.
January 10, 2024
Posted
December 19, 2023
What Comes Next for the Creator Economy? (Um, Apart from That $480 Billion)
TL;DR
A national poll identified 27 million people, or 14% of 16 to 54-year-olds, working as “influencers” in the US economy.
The creator economy could be a $480 billion industry by 2027 as it continues to grow a sizable business ecosystem around social media stars.
Yet traffic and wealth tend to be concentrated among a very few creators, with fewer than 10% of full-timers earning a decent wage and many making less than $2,000 a year.
The latest innovation driving the creator economy forward is artificial intelligence.
While there is near universal agreement about the growing size and importance of the creator economy, estimates vary widely. For example: Citi estimates there are more than 120 million content creators generating $60 billion of revenue, a figure which it estimates is growing at about 10% per year.
Goldman Sachs research has a very different estimate, saying the total addressable market of the creator economy could roughly double in size over the next five years to $480 billion by 2027 from $250 billion today.
Meanwhile, it estimates there are presently 50 million global creators, growing at 10-20% per year — far less than Citi.
In a national poll of 5,854 Americans market researcher Keller identified 27 million people, or 14% of 16 to 54 year olds, working as “influencers” in the US economy.
However, there is consensus that growth has not stopped and will be driven by investment in influencer marketing and the rise of ad-revenue-share models, particularly in short-form video on platforms like Instagram, TikTok, and YouTube.
As Goldman Sachs puts it, creators earn income primarily through direct branding deals to pitch products as an influencer; via a share of ad revenues with the host platform; and through subscriptions, donations and other forms of direct payment from followers. Brand deals are the main source of revenue at about 70%, according to its data.
eMarketer’s Insider Intelligence forecasts that in 2024, US influencer-marketing spend will hit $5.89 billion, and that its growth will “remain in the double digits through 2025.”
“The funds are not drying up anytime soon and we are seeing more and more people becoming creators,” Shannae Ingleton Smith, president and CEO of Kensington Grey Agency tells Amanda Perelli at Business Insider. “It’s a viable career space and in many cases pays more than the top tech jobs. Where the advertising dollars are, to me, is a great indication of sustainability.”
Since its inception in the mid-2000s the creator economy has also grown to encompass a range of professionals who work for creators. These range from managers to video editors, as well as tech execs who have built platforms and companies to help creators make money and build audiences.
Perelli interviews Kate Lingua-Marina, a creator known by her handle @SiliconValleyGirl, who explains that she made her first video in 2014 while applying to universities in the United States. She decided to document her journey — and her views exploded to the point that she now has three YouTube channels and a vlogging channel.
“I used to film everything myself,” Lingua-Marina says. “These days I have videographers who help me from time to time, depending on the type of content that I’m creating. I have several editors to help me with editing. Someone helps me post on platforms. I have a manager who’s responsible for working with brands.”
But not everyone can be a MrBeast. In fact, no one should be mistaken that becoming an influencer is an easy way to make money.
Only about 4% of global creators are deemed professionals, meaning they pull in more than $100,000 a year, finds Goldman Sachs.
A recent survey of 689 creators by the influencer-marketing platform Mavrck found about 51% made less than $500 a month. In the survey, nearly a quarter of creators said they earned more than $2,000 a month, and about 4% said they earned more than $10,000 per month.
Keller’s research concluded that 6% of Americans are full-time creators earning an average of $179,000 per year, against an overall average income of $93,000 per year. More than half of creators make less than $10,000 annually, and a third make less than $2,000.
“While the livelihood of the 11.6 million full-time creators (in the States) is a robust $179K/year, the total number of creators is larger than most estimates, likely based on the one third of them who earn less than 2K a year,” the researcher notes.
The creator economy is also facing mixed financial signals. After a flat 2022, YouTube ad revenues were up around 5% by the third quarter of 2023, with creators receiving just over half of the ad revenue generated on their channels. On the other hand, investment in the creator economy has dropped sharply, with total funding for US startups falling 50% last year compared to 2021.
Revenue and funding going into platforms have decreased quite dramatically.
Criddle says, “One key problem for the creator economy is that creator traffic and wealth tends to be concentrated among the very few, such as MrBeast. Only 4% of creators are defined as professionals earning at least $100,000 a year.”
While the creator economy might be moving away from past explosive growth, there is evidence consumers remain willing to pay for quality content.
“The days of wild growth might be over or at least on hold but that’s not going to stop the millions of creators out there,” Criddle says. “There is enough demand, enough supply and now is the time when the focus should shift from quantity to quality.”
AI Comes to the Creator Economy
The latest innovation driving the creator economy forward is artificial intelligence.
This year, YouTube unveiled new AI tools and features aimed at simplifying content creation. According to Business Insider, the industry is betting on AI not to replace creators, but to increase productivity and bring more opportunities for people to make content.
Rising AI startups in the creator economy like Crate, an AI platform helping creators streamline the creative process, and Midjourney, an AI model that can generate images, are winning over investors.
Keller’s survey found half of creators saying they want to start working with AI, with virtual reality/augmented reality at #2 on the list of tech they’d like to engage with in the future.
In a recent survey of 2,000 influencers by membership platform Creator Now, 90% said they were using ChatGPT during the content creation process, and 31% said they were using Midjourney. The top reason cited for using AI was to increase the speed of content creation. AI tools can edit TikTok or YouTube videos in a fraction of the time it takes today.
“AI is a game changer,” says a creator speaking to the Financial Times. “The first time we used it was to create a script. I had to change some things, but it was right there in front of me in 60 seconds.
“If I create an AI version of myself, if AI creates scripts, then my job is to decide which content goes out there and which topic my AI prototype is talking about. Good creators are becoming producers.”
Moore expects that artificial intelligence tools will drive the professionalization of the influencer class. Creators, she explains, are “going to amplify their creativity to a new level. They’re going to come back with insights into the audience and say, ‘I’m going to set you apart.’ The ones who are having fun, making a little passive income — they’re going to find it harder to create.”
The creator economy emerges as a powerful force in the digital marketing landscape, reshaping the way brands connect with consumers.
December 20, 2023
Posted
December 19, 2023
Evan Shapiro Amplified: What to Expect for M&E in 2024
TL;DR
“Evan Shapiro Amplified: What to Expect for M&E in 2024” examines the pivotal trends disrupting traditional business models in the new user-centric era, and provides actionable strategies for industry players to adapt and thrive in a rapidly changing media ecosystem.
The advertising landscape within the Media & Entertainment industry is poised for transformation as we venture into 2024, says Shapiro, with digital media and connected TV at the forefront.
User-generated content is increasingly being considered on par with premium content in terms of quality and effectiveness for advertisers, Shapiro says.
Mergers and acquisitions will continue to be a major trend, with Big Tech companies using these strategies to solidify their hold on the media ecosystem and reduce competition.
The streaming wars have morphed into the “battle of the bundles,” as companies like Amazon and Alphabet create comprehensive service packages that cater to the evolving hierarchy of consumer needs.
As 2024 approaches, the Media & Entertainment industry stands at the dawn of a new user-centric era, shaped by relentless innovation and shifting consumer dynamics. In this ever-evolving landscape, media universe cartographer Evan Shapiro dissects the pivotal trends disrupting traditional business models and provides actionable strategies for industry players to adapt and thrive in the rapidly changing media ecosystem.
Our latest installment of Evan Shapiro Amplified, “What to Expect for M&E in 2024,” delves into Shapiro’s predictions for the upcoming year. His insights, rooted in a deep understanding of industry trends and consumer behavior, offer a glimpse into what 2024 holds for media professionals — from the rapid shift to digital in advertising and evolving measurement challenges, to potential big-name mergers and acquisitions, strategic service bundling, and Big Tech’s growing dominance.
The Digital and Connected TV Advertising Revolution
The advertising landscape within the Media & Entertainment industry is poised for transformative change as we venture into 2024, says Shapiro, with digital media and connected TV (CTV) at the forefront.
“From an advertising standpoint, this year was the year that digital and connected television caught up” to, and likely even surpassed, traditional television on a month-by-month basis, he says.
He notes the significant shift in advertising revenues, highlighting the growing dominance of digital platforms. “But when you look at where the money is going, it is not going back to where it was pre-lockdown,” he emphasizes. “The money that’s been sitting on the side isn’t going to the same place as it was distributed to equally in 2019. It’s going to new places, and fewer places.”
Another shift, says Shapiro, is that user-generated content is increasingly being considered on par with premium content in terms of quality and effectiveness for advertisers.
“Creator-led content is increasingly moving to the big screen, and the two ecosystems are commingling on connected televisions,” he says. “The ad buyers themselves now see creator-led content on par from a quality and environment and — crucially — efficacy standpoint, as they do professionally produced Hollywood content.”
It’s important to understand “that most creator-led content actually is professional content,” Shapiro continues. “The people who create the most successful creator-led content are professional creators, often within larger ecosystems.” This perception has led audiences to shift to connected TV platforms like YouTube, which is now the single largest channel on TV, he notes, “but so are the ad dollars because both communities now see that as the destination for their resources.”
Big Tech Continues to Dominate in 2024
The dominance of Big Tech companies in M&E is expected to deepen in 2024, Shapiro predicts. He describes these entities as “trillion-dollar Death Stars,” strategically employing mergers and acquisitions to reduce competition in the marketplace. This trend, he underscores, transcends mere power consolidation, aligning closely with the strategic service and product integration seen in the Amazon Prime model.
“This year has been pretty tumultuous,” Shapiro notes, pointing to the significant increase in job losses within media and tech at a scale not seen since the height of the lockdown. “Part of that is just a reassembling of the industry itself. And part of that is companies preparing themselves for sale.”
Looking ahead, Shapiro anticipates several substantial mergers and acquisitions in the M&E ecosystem. “I think a lot of companies that find themselves at a fraction of their former valuations are going to be open to being subsumed by major corporations, I think Big Tech will solidify its hold on the media ecosystem, through either acquisitions or through, basically, the reduction of competitive forces in the marketplace.”
As a result, he warns, “these combinations are going to create even more job losses in 2024 in the media ecosystem.”
But in an increasingly Big Tech-dominated world, Shapiro reminds us, “content is still king. Big Tech is the throne, but content will remain at the heart of the ecosystem.”
The media ecosystem “can’t survive by Big Tech alone,” he explains, a fact that Big Tech companies already understand all too well. “You have to understand the rules of the game, and you have to understand what your leverage is in your ecosystem, your ability to deliver a very specific audience to all the platforms that you’re going to perform your content across.”
The Battle of the Bundles
The streaming wars have morphed into what Shapiro calls “the battle of the bundles.” This shift towards a more integrated approach is best exemplified by Amazon Prime, which aligns its media services to serve the diverse and evolving needs of consumers and their “hierarchy of feeds.”
The supremacy of the traditional “triple play” bundle of phone, cable TV, and internet is over, Shapiro declares. “We need to replace the high-margin value system created around video as a marketing hook for the triple play. And the best way to do that is Amazon Prime,” he advises.
“The companies that are able to do this are going to win in the user-centric era where the competition is for total attention, and total mindshare.”
Alphabet serves as another model of successful consumer engagement, Shapiro points out. “They’re currently operating the only growing MVPD with YouTube TV,” he notes, highlighting the company’s leading position as the top video platform on both mobile and connected TV, in addition to being one of the world’s top three music services and possessing “a great sports strategy with NFL Sunday Ticket.”
Challenges and Opportunities for 2024
As the Media & Entertainment industry marches into 2024, it faces a landscape rife with both challenges and opportunities to thrive in the user-centric era. Video games, in particular, show potential for explosive growth.
“One of the fastest growing segments in the ad economy right now is in gaming,” Shapiro notes. “Most of that is mobile,” he continues, pointing out that a new Publishers Clearing House survey of 68,000 consumers found that more than half of people over 25 are gamers, and game either daily or multiple times per week. “Most of those are on mobile, most of those are women. And most of the revenues generated in the mobile gaming ecosystem come from ad dollars. It’s a very effective ad environment and it’s growing very quickly.”
Shapiro advises media companies to look for extensions on the business they’re already doing with new forms of revenue and commerce. “Not only what your merchandise is, but how do you get rewarded for the products you’re selling for your ad partners,” he says. “So there’s a ton of different things you can do. But most of them aren’t necessarily stealing more share from your neighbor. A lot of times it is growing the revenue per user within the universe that you currently have.”
Media universe cartographer Evan Shapiro examines the pivotal shift to a user-centric era of media, supported by new consumer research.
January 5, 2024
Posted
December 18, 2023
How Erik Messerschmidt Post-Produced His Cinematography for “The Killer”
TL;DR
Cinematographer Erik Messerschmidt details his latest collaboration with David Fincher on “The Killer,” featuring Michael Fassbender as an assassin with sociopathic personality traits and an attention to detail that leaves nothing to chance.
Featuring an avocado-colored LUT, exquisite scene management and meticulous coverage, Fincher’s edict for “The Killer” was always to control the pace.
Multiple Paris interiors were constructed on sound stages in New Orleans, in a series of three-walled sets that the actors were able to walk through.
The production shot practically whenever the outcome could be controlled, but lens flares and other digital effects were created during post-production.
A momentous fight sequence appears to have been captured using handheld cameras, but the footage actually had de-stabilization applied in post.
NAB Amplify caught up with cinematographer Erik Messerschmidt just as he was about to fly to the Camerimage film festival in Poland, where Ferrari, his first film with director Michael Mann, was in competition. “An extraordinary experience, once in a lifetime,” was his on-the-spot reaction to the question, “How was it?”
But we wanted to talk to him about The Killer, a Netflix movie with an appearance in selected theaters at very selective times. Most people would wait for the stream and live with the Internet compression artifacts for the treat of a Fincher film, this time about a man who kills for a living. Cue Michael Fassbender, with sociopathic personality traits and an attention to detail that leaves nothing to chance; some reviews suggested that this man was a depiction of Fincher himself.
If you have seen previous films or television shows from the Fincher/Messerschmidt duo, especially 2018’s Mindhunter, you would be in a comfortable place from the get-go of The Killer: An avocado-colored LUT, exquisite scene management, and meticulous coverage. “Is this all you?” the DP was asked.
“It’s a thing that David and I do together. I enjoy the process of camera direction; I view it as sort of my principal job, really. It’s thinking about the structure of the film and of each scene. Every director’s interaction in terms of coverage and camera direction is different. It’s the first thing David and I discuss: structure and pacing. It’s almost an editorial conversation in terms of what we’re going to provide Kirk [Baxter, the editor] and how each scene breaks down in terms of the pace,” he said.
“Then we watch the rehearsal, and I watch what he’s doing with Michael and what Michael’s doing. We look for the moments we need: the wide shots, the single setups, and what we need to address them. It’s quite scene-specific, but the edict on this movie was always to control the pace.”
When watching a Fincher movie, the joy is giving in to that control and mastery. For instance, when the killer is preparing for a hit, the pace differs from when it all goes wrong, and then that comforting LUT and algebraic camera direction deforms into something less exact.
“When we’re in the killer’s space, the camera is precise and classic. When his world falls apart and he’s no longer in control, then the camera follows suit,” says Messerschmidt.
But any comfort you may feel, especially in the first 20 minutes of the movie, is a ruse and preparation for the ride to come. Messerschmidt explains their methodology, “We’re not and never are pointing the camera at action. We’re — especially with David — quite concerned about using the frame to provide information for the audience. Those things can exist at the edge of the frame, at the center of the frame, and the relative depth sort of correlates to their importance,” he says.
“We’re quite cautious about the art direction and the composition of each shot with consideration about how we’re spoon-feeding that information to the audience. It’s like, ‘You need to see this now, so we’ll include it in the frame; you need to see this reaction, so it’ll be in a close-up; you need to see what he’s looking at, so you’ll be in a point of view,’” the DP continues.
“That all comes from a place of blocking, really. The way that David sets it all up is quite holistic, and we now have a shorthand. I can see what he’s doing with blocking, so I can see the POVs and close-ups, so we’re generally in agreement.”
Without spoiling what comes next, the film stays with the killer almost all the time, but doesn’t empathize with him at any time. He is just a part of the tension that is Fincher’s most important gift to the audience.
In production terms, The Killer is as complicated as Fincher movies get. Ideas are suggested, and they are constructed or deconstructed differently if they don’t work. In the opening scenes, for instance, the apartment that the killer is watching isn’t real and was constructed thousands of miles away. There was just nothing in Paris that would work for the vision the director had.
Messerschmidt explains how they found the look they were after. “We’d gone to Paris looking for a location to do it all practically. We looked for a Penthouse apartment with a vantage point across the street that could be the killer’s lair, and we didn’t find it. We didn’t because we needed windows large enough to see all of this action clearly. The decision was made to build the apartment across the street; the final scene is an assembly of three different locations. The point of view when he’s looking out of the window at the cafe and all the actions on the ground is all real, and it is a square in Paris. The exterior facades are plates shot in Paris from the same vantage point. So we did a nine-camera setup looking out of this window that captured all that action and those plates. So we had matching light and light reference,” he details.
For the interior of the apartment, a set was built in New Orleans. “That’s where Michael’s action existed, and the window was real looking out to blue screen. The penthouse apartment where the target is was a build on stage as well on a different soundstage and it was a series of three-walled sets built together so the actors could walk through.”
During post-production, the set was placed on top of exterior plates that had already been shot, blending in the façade in front. “The facade was entirely digital, and the only thing that was real was beyond the windows — in fact, there was no glass either in those windows; all the glass was CG,” he says.
“We had to previsualize the action as they were all shot at separate times. The target’s movements had to be worked out because Michael’s points of view were all relevant to the edit.”
The telescopic sights were practical long lens shots, but Messerschmidt had the scope sent to post-production so they could see firsthand how the scope’s optics worked. “You then get that kind of warping around the edges, the drift of the crosshair and all the things that it really does.” This allowed the production team to capture the imagery the killer sees through the scope practically on a sound stage.
There’s no surprise that so much deconstruction is done on the movie. Fincher’s style has always been to find a way of getting the shots he wants, but this production is full of post-produced shots when doing it practically leaves too much to chance.
The lens flares employed for some of the street scenes came as a surprise to anyone who has watched Messerschmidt’s work with Fincher. He hadn’t done them before, and the flares looked different but beautiful. “Were they anamorphic flares?” we ventured to ask.
“I’m not a fan of anamorphic; in fact, I’ve never shot an anamorphic film; I’ve always shot spherical,” he replied. “I’ve shot some anamorphic commercials but have also been a bit disappointed, to be honest. It seems a little bit silly to shoot anamorphic with a digital camera. But I do sometimes like the qualities that anamorphic flares produce, but I never get them when I want them and get them when I don’t want them.”
There was some digital flare work in Mank, Messerschmidt said, “but it was very subtle. The Bell and Howell lenses of that era had particular flare characteristics that we wanted to copy. The CG artists got good at it, and I told David, ‘What if we play with that a little bit?’ So we would intentionally put bright things in the frame, practicals, or sun hits on roofs of cars where we would do an elaborate CG flare.”
Working in post-production, “I would mark it up and say that we needed a blue streak here, which should be very aggressive, and the guys would paint it in and make sure it was just right. It was an enjoyable experiment. It was cool to go in there and art direct them,” he said, adding, “I also rarely use diffusion on a lens, but there were moments in the Dominican Republic that I thought it would be interesting to try.”
For scenes the filmmakers wanted to appear very humid, they once again went digital, using a DaVinci Resolve plugin called Scatter. “It’s a bit of post-production cinematography. I would look to do that again. I think it’s all about control; the fear with that decision is that you’re working with a team you don’t trust to implement it the way you want, and it is not a fear I have with David. I’m pretty involved with that process,” he explains.
“There are certain effects that you have to do optically, but I’m not nostalgic about it like some people. I don’t believe you must be incredibly dogmatic about some things; it’s about the result.”
Veering back to what might be safer ground, and to more practical techniques, we inquired about the momentous fight scene towards the movie’s end, with its superb handheld camera work.
“That was quite an undertaking and a culmination of many people’s work, starting with the stunt coordinator. It was, of course, heavily choreographed and not something that was shot off the cuff. The thesis was that we wanted the audience to be geographically centered regarding understanding the space, so we didn’t deliberately disorientate them so they knew where we were in the house,” he said.
“Each room in the house has a distinct color palette that we key up in the beginning so you understand that you’re in the kitchen, gaming room, or bathroom. We go through the process of this fight, and we revisit all these spaces in reverse until he arrives back where he dropped the gun.”
However, Messerschmidt says, “there’s very little real handheld in that sequence. Almost all of that is post de-stabilization. It’s nice because we can go in there and art direct the level of shake by saying we can slow down a little bit here, quicker here. Sometimes I find that aggressive handheld is very hard to judge on the set and keep it consistent across five or six days of shooting that it took.”
For Messerschmidt and Fincher, this philosophy of only delivering the best experience for the audience is like a mantra. “It’s fun to see what we can do and get away with,” Messerschmidt concludes. “To be honest with you, anyone can shoot with a handheld shaky camera pointed at people fighting, and there’s a long history of success with that technique in action movies. There is a playbook for that. We wanted to see if we could do it differently.”
Someone tracking the conflict raging in the Middle East could have seen the following two videos on social media. The first shows a little boy hovering over his father’s dead body, whimpering in Arabic, “Don’t leave me.” The second purports to show a pregnant woman with her stomach slashed open and claims to document the testimony of a paramedic who handled victims’ bodies after Hamas’ attack in Israel on October 7, 2023.
Even though these videos come from different sides of the Israel-Hamas war, what they share far exceeds what separates them. Because both videos, though real, have nothing to do with the events they claim to represent. The clip of the boy is from Syria in 2016; the one of the woman is from Mexico in 2018.
Cheap But Effective Fakes
Recent headlines warn of sophisticated, AI-driven deepfakes. But it is low-tech cheap fakes like these that fuel the latest round of disinformation. Cheap fakes are the Swiss army knife in the propagandist’s tool belt. Changing a date, altering a location or even repurposing a clip from a video game and passing it off as battlefield combat require little know-how yet effectively sow confusion.
The good news is that you can avoid being taken in by these ruses — not by examining the evidence closely, which is liable to mislead you, but by waiting until trusted sources verify what you’re looking at. This is often hard to do, however.
In the largest survey of its kind, 3,446 high school students evaluated a video on social media that purported to show election fraud in the 2016 Democratic primary. Students could view the whole video, part of it or leave the footage to search the internet for information about it. Typing a few keywords into their browsers would have led students to articles from Snopes and the BBC debunking the video. Only three students — less than one-tenth of 1% — located the true source of the video, which had, in fact, been shot in Russia.
Your Lying Eyes
Why were students so consistently duped? The problem, we’ve found, is that many people, young and old alike, think they can look at something online and tell what it is. You don’t realize how easily your eyes can be deceived — especially by footage that triggers your emotions.
When an incendiary video dodges your prefrontal cortex and lands in your solar plexus, the first impulse is to share your outrage with others. What’s a better course of action? You might assume that it is to ask whether the clip is true or false. But a different question — rather, a set of related questions — is a better starting place.
Do you really know what you’re looking at?
Can you really tell whether the footage is from atrocities committed by Russian forces in the Donbas just because the headline blares it and you’re sympathetic to the Ukrainian cause?
Is the person who posted the footage an established reporter, someone who risks their status and prestige if it turns out to be fake, or some random person?
Is there a link to a longer video – the shorter the clip, the more you should be wary – or does it claim to speak for itself, even though the headline and caption leave little room for how to connect the dots?
These questions require no advanced knowledge of video forensics. They require you only to be honest with yourself. Your inability to answer these questions should be enough to make you realize that, no, you don’t really know what you’re looking at.
Patience Is a Powerful Tool
Social media reports of “late-breaking news” are not likely to be reporting at all but are often pushed instead by rage merchants wrapping an interpretation around a YouTube video, accompanied by lightning bolt emojis and strings of exclamation points. Reliable reporters need time to establish what happened. Rage merchants don’t. The con artist and the propagandist feed on the impatient. Your greatest information literacy superpower is learning to wait.
If there are legs to the video, rest assured you’re not the only one viewing it. There are many people, some of whom have mastered advanced techniques of video analysis, who are likely already analyzing it and trying to get to the bottom of it.
You won’t have to wait long to learn what they’ve found.
Posted December 15, 2023
Editor Shelly Westerman Solves the (Post Workflow) Mysteries for “Only Murders in the Building”
Editing a mystery can be a delicate business. A reaction shot held a few frames too long can be a giveaway; cut a few frames too short, and the eventual payoff could feel too obvious. This is challenging enough in a TV episode or a movie, but even more so in a ten-episode arc.
But that’s the kind of work the editors of “Only Murders in the Building” are responsible for. Season 3 is streaming now on Hulu.
The popular series, starring Steve Martin, Martin Short and Selena Gomez, is about to finish its third season, and editors Shelly Westerman, ACE; Peggy Tachdjian, ACE; and Payton Koch not only have to keep the reveals coming amid the show’s often absurdist humor and moments of pathos and drama, but they also have to attack some major musical numbers for the show-within-the-show “Death Rattle Dazzle,” which sits at the heart of the season’s story arc.
In October, Westerman spoke with writer and film historian Bobbie O’Steen on stage at NAB Show New York’s Insight Theater. The duo discussed the “meticulous art of film editing.” Watch their full conversation (below).
The work of editing the series involves close collaboration among executive producers John Hoffman (the showrunner), Dan Fogelman and Jess Rosenthal; the writers, the directors and actors and the trio of editors, each of whom takes responsibility for particular episodes.
The editing process starts before cameras roll, when they receive that week’s script and virtually attend the table read in New York. Westerman explains, “Once you hear the words spoken, you hear the rhythms, you start to get an idea in your head, and you can begin visualizing an episode.”
This is followed by a concept meeting, featuring all the department heads. “We talk at a high level about the look and tone of the episode, and then we have a tone meeting specifically with the episode director and executive producers, and we go through the script scene-by-scene and talk about what’s happening.
“The director will propose all their questions and editors will chime in with questions, so there are a lot of very helpful discussions that happen early on.”
Each director helms two episodes, which are cross-boarded and shot in New York, usually with six or seven days allotted for each.
“Once they’re shooting one of your episodes,” Westerman explains, “we’ll start to get the dailies. What we see might match everything we’ve talked about to that point, or they might have discovered things on set that made the scenes go a very different way. But at least all the preparation lets us start with a grounding from which to work.”
Editors are given roughly two days to get their editor’s cut together and sent off to the director. “Then on a half-hour show like ‘Only Murders,’” she says, “the directors get about two days to work with the editor, before we need to turn that [cut] over to John and the other executive producers for their feedback.”
DRILLING DOWN TO THE WORKFLOW
Westerman, who a few years ago was adamantly opposed to the idea of remote editing (“I always said you must be in the room for creative collaboration,” she’d frequently asserted), has completely revised her feelings on the subject. She acknowledges that she wouldn’t even have been able to take this job without the ability to work remotely while spending time in Florida caring for her parents. In fact, all three editors and each of their assistants work remotely.
While all work remotely, none of them actually does the work on a local machine or keeps any of the media where they are. The Avid workstations and media all sit securely inside the Pacific Post facility, where they are networked together via Avid NEXIS.
Westerman, who works on a Mac “trashcan” wherever she’s set herself up to work, uses Jump Desktop to access her hardware and the network, as do the other two editors, though they happen to work on Mac Minis.
When dailies are ready, Westerman’s assistant, Jamie Clarke, is the first one notified. He will also have access to camera and sound reports and script notes, and he will QC the material to ensure that it’s all in sync and there are no technical issues.
Then Clarke organizes the scenes within Westerman’s system. He groups anything shot with more than one camera (most scenes in the show are covered by two, and some of the musical numbers by three) into Group Clip projects, and he loads footage into bins to her specifications (each editor has their preferred method of organizing material).
“I don’t get the scenes in order,” Westerman says, “but I’ll start to build sequences pretty quickly, so that I can see how it’s flowing. By the time they finish shooting the episode, I’ve got a rough sketch of the acts put together. Then, for my two-day editor’s cut, I’m really trying to polish and tighten.”
MAINTAINING A SEASON-LONG MYSTERY
Westerman received a detailed briefing from Hoffman prior to commencement of production for the season. This provided her with a broad overview of all the episodes, “so we had some idea of what was going to happen as we got into the season.”
But that isn’t the only way to proceed, Westerman acknowledges. “Peggy said she didn’t want to know who the killer was,” she recalls. “She felt it helped her with the surprises because she was surprised, as well.”
Regardless, in a story propelled by constant revelations and clues, someone needs to maintain an ongoing overview. Hoffman and the other executive producers, Westerman says, “will sometimes look at a version we present and say, ‘Hey, we need to see this in Episode Three because we’re going to refer to it in Episode Six.’ So then, we go back and fine-tune the episode.
“The moment that Ben Glenroy [Paul Rudd] falls down the elevator shaft and the [three lead characters] run out of the elevator, turn back around to see what happened and Mabel says, ‘Are you fucking kidding me?’ — that scene comes back into play in a later episode where she’s looking at a hanky Ben is holding. I didn’t use that and one of the executive producers said, ‘The hanky’s important. We have to see her looking at it at that point.’”
There is also a moment where Charles (Steve Martin) gets into a fight on the fateful opening night that kicks the season off.
“They shot the fight scene for use in Episode Nine, but then it turned out I needed to use some of it in Five, and Payton needed some of it for Six. But Peggy hadn’t cut Nine yet, so we all wound up pulling from her footage, using bits and pieces from the fight scene that worked for our episodes. Later, we went back to make sure we were all in sync with one another in terms of what we were using from the scene.”
This back-and-forth happens frequently, particularly for the recaps that show important moments from previous episodes. “One of us might do a recap and another one will say, ‘You’ve got to change that. That isn’t in the show anymore.’”
POLISHING PICTURE AND SOUND
Long gone are the days when editors turned in rough cuts with “insert effect here.” The final sound editing and VFX creation will continue after picture is locked, but directors and producers expect the editors to deliver scenes that are complete, and work as is. So much of that work commences while Westerman and the other editors are still sketching out scenes.
“The schedule is so accelerated compared to a feature,” the editor notes, “so as I’m going along and stringing together and polishing scenes, we’re also doing sound work, adding score, adding VFX. We’re doing all of that together so that by the time I get to the end of my editor’s cut, I’m hopefully in pretty good shape with a polished cut to present to the director.”
Editing assistants are generally skilled at basic VFX work, such as wire removal, and the show has a VFX artist on staff from the beginning of production who can step in and handle quite a lot of the work as it comes up.
“There’s a scene where one of the characters is in a basement threatening Charles and Mabel with a blowtorch,” Westerman recalls. “Of course, they couldn’t shoot with a real flame for safety reasons, so the VFX artist handled that.”
Sound Supervisor Matt Waters gets involved early on to build a wide variety of sound effects. As the season progresses, there are more and more sounds that can be re-worked and re-used. Fairly early in the season, the editors already had access to quite a few sounds of the theater where much of Season Three takes place. SFX such as specific doors opening and closing, and hallway background sounds were accessible to the editors and sound editors.
CUTTING MUSICAL NUMBERS
While the musical numbers are meant to be dramatic and advance the plot, they need to be approached differently from regular dialogue scenes, especially as “Death Rattle Dazzle” really gets on its feet and the routines get more elaborate.
Cutting musical sections uses a different set of muscles, Westerman explains. “These are big numbers and big Broadway people like Sara Bareilles, Michael R. Jackson, Marc Shaiman and Scott Wittman were stepping in to help with the songs, so I’m not going to lie, it was intimidating at times.”
These scenes are generally shot with three cameras, and Westerman not only groups all the angles from a take so she can watch them together, but also has her assistant build what she calls a “super group” comprising all the takes of a certain setup, as well as all the coverage, so she can observe every possible permutation of picture for each moment of the song.
When there is singing by, say, Steve Martin or Meryl Streep or one of the other performers, the songs are generally pre-recorded by the artist, who then sings live during the shoot while being fed the playback through an earpiece, so both the playback and live audio are available on their own clean tracks.
This approach leaves open the possibility of using the prerecorded audio or the live audio, depending on which plays best. In fact, many of the numbers are the result of the music and sound departments cutting extensively, sometimes syllable by syllable, to come up with the very best rendition.
“The performers went in and did recordings of all the songs a while before they were used,” the editor explains. “We get those early on so we can listen to them and get them in our heads and know the songs themselves. Then, once we get the scenes, we start assembling those right away because they took a little bit longer to craft. They’re technically more challenging. I’ll get it laid out first, and then I can go back and find these little moments that help tell the story.”
Once the musical scenes are cut, the music production team and music editor Michah Liberman go in and re-work the sound, sometimes alternating between the prerecorded and the live versions.
“Sometimes, they’re literally cutting syllable by syllable in a very exacting, precise way. Finally, our sound mixer, Lindsey Alvarez, ties it all together.”
“There is a lot of teamwork on the show,” Westerman sums up, “and it’s been rewarding and fun to work on a show this good and be part of that collaboration.”
Posted December 14, 2023
Translating “The Last of Us” From One Screen to Another
BY MICHAEL MALONE, BROADCASTING+CABLE
HBO’s adaptation of Naughty Dog’s wildly popular post-apocalyptic video game “The Last of Us” is a notable (and successful) example of seeking out IP from nontraditional sources.
(The Last of Us is set in a post-apocalyptic America, 20 years after a fungal infection has turned much of the population into zombies. Neil Druckmann created the show alongside Craig Mazin.)
Working with such detailed source material means you essentially need an army, and Mazin confirms that “The Last of Us” takes “thousands of people” to make.
Mazin was joined on the NAB Show Main Stage by several of them for a panel moderated by THR’s Carolyn Giardina. The conversation also featured cinematographer Ksenia Sereda; editors Timothy Good, ACE and Emily Mendez; VFX supervisor Alex Wang; and sound supervisor Michael J. Benavente.
Mazin said, “There’s no way for a film to be by one person. There’s hundreds of people — in our case, thousands of people.”
Mazin described The Last of Us producers, cast and crew as “a big family.”
Mazin spoke of the “luck” involved in gathering the right producers to work on the show and how listening to them in interviews and chats tells him a lot more than their credits do. “I like talking to people,” he said, “and hearing their passion for things.”
Challenges of Adapting
Season one was shot in Alberta, Canada. The producers discussed the challenges of adapting the popular video game to series.
Alex Wang, VFX supervisor, described the game’s look as “so beautiful and so immersive. How do we use that as inspiration?”
Cinematographer Ksenia Sereda said the producers aimed for a balance between borrowing from the game and giving viewers something fresh. “We wanted to preserve the most iconic parts,” Sereda said, “but at the same time, we did not want to exactly copy the look.”
She spoke of the “massive” variety of choices for cameras and lenses, and said the ARRI ALEXA Mini gave the shots a realistic feel and helped the viewers get closer to the characters.
Mazin quipped: “I don’t understand any of that. I’m glad you do.”
Editor Timothy Good said he’d never played the video game before. Editor Emily Mendez, on the other hand, was a big fan. The two brought together their different perspectives to give the show a distinctive feel.
Key Moments
The editors spoke of the key moments in season one. Pedro Pascal’s Joel lost his teenage daughter in the pilot, and is reluctant to open himself up to another teen girl as he gets to know Bella Ramsey’s Ellie.
Ellie’s book of puns makes him smile for the first time in eons. “You can see the transformation between the two characters and how they sort of come together,” Good said.
Mendez mentioned Ellie stitching up Joel’s stomach later in the season, and the effort the producers went through to give the scene extra impact. “You’re with her in that moment,” she said.
Michael J. Benavente, sound supervisor, spoke of “a quiet world” in the show with no freeways, no kids on playgrounds, no airplanes. The viewer hears snowfall in one episode. “It really helps the story of the people,” Benavente said of the hushed vibe. “When you hear what they’re hearing, when you feel what they’re feeling.”
Season two will shoot in British Columbia. “This is what I do — I do The Last of Us,” said Mazin with a smile. “I couldn’t be happier.”
Posted December 14, 2023
Editing “All of Us Strangers”: Shifts Between Real and Imagined
TL;DR
Director Andrew Haigh and editor Jonathan Alberts delve into the making of “All of Us Strangers,” revealing how the film holds a deeply personal significance for both filmmakers.
They explain that the tone of the film was tricky, noting the challenge of blending supernatural elements into its otherwise straightforward drama.
Haigh and Alberts wanted the audience to feel dislocated, consistently questioning the story’s reality, and found music a creative help in achieving this effect.
Loosely inspired by Taichi Yamada’s 1987 novel Strangers, Andrew Haigh’s All of Us Strangers has garnered critical acclaim as a romantic-ghost story with a deeply personal touch. The British writer-director explored the film’s themes during a panel discussion at the New York Film Festival, describing it as an exploration of the desires, fears, and traumas unique to a specific generation of gay men.
“It was the most expensive therapy I’ve ever done. And it did feel like therapy, in many ways. The story is clearly not autobiographical but it definitely does come from a personal place. I wanted to tell an experience, as I see it, from a queer experience but not just my experience.”
The film is about Adam (Andrew Scott), a melancholy screenwriter living alone, who meets and begins a passionate relationship with the more extroverted Harry (Paul Mescal). At the same time, Adam begins another parallel journey to confront his troubled past and perhaps reconcile his unsettled present.
“A lot of the elements in the story are personal to me,” he revealed. These include filming in Haigh’s actual childhood home, which he last visited 42 years ago.
“But it was always about trying to tell a wider story about what it means to be a parent, what it means to be a child, what it means to be a lover and how we try and negotiate those complicated relationships that kind of come and go through our lives.”
Haigh’s script notably diverges from the original source material, where the character played by Paul Mescal was originally written as female.
“It has a different type of thing going on which works as a traditional ghost story,” he told NYFF programmer and panel moderator Florence Almozini. “It really does fit in with that traditional Japanese kind of ghost story style, which I like. But I knew that wasn’t the film I wanted to make. That wasn’t what was interesting to me about it. I wanted to find a more grounded reality of the story and then take it to somewhere different.”
In the film, Adam is preoccupied with memories of the past and finds himself drawn back to the suburban town where he grew up, and the childhood home where his parents (Claire Foy and Jamie Bell), appear to be living — just as they were on the day they died, 30 years before.
Haigh’s regular collaborator, editor Jonathan Alberts, found the script resonated personally with him too, telling Deadline’s Matt Grober that it felt like it was written with him in mind.
“We shared the experience of growing up in the eighties, growing up gay, kind of growing up with the specter of AIDS happening and trying to deal with all sorts of feelings of grief or trauma and shame and all of these things.”
While All of Us Strangers was tricky, both tonally and as a story rooted deeply in internal experience, another challenge of the project for Alberts was figuring out how to grapple with the way in which the protagonist ends up “slipping between these worlds of the 1980s and contemporary London” in the story.
“We wanted the audience to feel dislocated, but anchored, not mired in confusion, but consistently questioning, is this real? Is this not real?” says the editor. “I feel like you always want to have an audience ask those questions, and you want to keep them active, and to keep putting the puzzle together.
“But when you’re creating a film that is essentially a bit of a puzzle, it’s always a question of, is this puzzle going to fit together? Because you can create a puzzle that doesn’t quite fit together, and people are just like, ‘I don’t know what’s going on.’”
Alberts came to All of Us Strangers after collaborating with Haigh on numerous projects over the last decade, from films like Lean on Pete and 45 Years, to shows like HBO’s Looking.
“We’ve been working for about 10 years together. So when we’re busy working on a television show or film, he’s busy typing in the background, and I’m cutting. That’s when I first hear about the script. Then, typically, he’ll share with me a few months later.”
When they get to the first cut of the film, about a week after shooting, he says the director and he never sit in the same room and watch it together, “because you’ve worked so hard, it’s like you’ve spent a lot of time yourself and your assistants putting it together. It’s an extremely vulnerable time for a director and seeing all the problems or seeing all the things they didn’t quite get.”
Alberts explains that the tone of the film was tricky in not being a straightforward drama but one that introduces supernatural elements.
“We never wanted to be moving to a genre, we always wanted to keep it in a very subtle space. And it’s a very delicate line. I think music helped to draw that out.”
Through screenings they experimented with a lot of different notes to find what was working and what was not before hiring a composer.
“When we were shooting this film in London I would take the tube and the train every day, and I was listening to this Italian composer Caterina Barbieri, whose music we ended up using as a temp soundtrack. She’s an amazing composer; we met with her and we thought about her doing a score. But eventually we kind of went in a different direction [hiring London-based French pianist Emilie Levienaise-Farrouch]. But that evolved over several months and many discussions.”
Haigh adds, “It’s obviously quite an unusual film and I was always very scared that the central conceit wouldn’t work. There are a lot of turns in the story that I was worried would not work. I wanted, even in the present day of the story, to feel slightly shifted from reality, even though that is based on an apartment block in London. It was really important to me that the tone just felt [to an audience] like ‘I’m not quite sure when and where this is set’.
“We thought really long and hard about trying to create a tone that made you feel like you were somehow separate from time. And that would allow you to understand the kind of conceit of the story and make it feel real when you suddenly go back and see parents.”
Posted December 13, 2023
“Poor Things:” Making This Crazy Fantasy a Reality
TL;DR
The Searchlight Pictures release and Venice Festival Golden Lion winner “Poor Things” is an awards season favorite on multiple counts, not least the cinematography of Robbie Ryan.
Ryan explains his use of various wide-angle and vintage lenses to shoot within large-scale “composite” sets that incorporated virtual production techniques.
Production designer James Price built four large composite sets at Origo Studios in Hungary, which deployed painted backdrops and cutouts as well as LED walls.
Portions of the film are also shot in black and white, with the final decision to do so only agreed upon at the last minute.
Director Yorgos Lanthimos says references for the creative team included “Dracula” and the films of Fellini and Fassbinder, as well as Powell and Pressburger — all famed for their surrealist and extreme screen visuals.
In what critics are hailing as his boldest vision yet, auteur Yorgos Lanthimos (The Lobster, The Favourite) delivers Poor Things, a punkish Frankenstein update that metamorphoses into a feminist fairy tale.
In the Searchlight Pictures release and Venice Festival Golden Lion winner, Emma Stone plays a peculiar, childlike woman named Bella who lives with a mysterious scientist and surgeon (played by Willem Dafoe). The movie is set in an alternate version of the 19th century and based on the 1992 novel by Alasdair Gray.
“I read the book around 2009 and immediately fell in love with it,” the director explained during a Q&A session following a screening at NYFF. “I hadn’t read anything like it. And it was mainly the character of Bella Baxter that I was drawn to.
“I just thought she was just this incredible, unique human being. The world of the novel itself, all the characters and the premise of it allowed you to explore the story of this woman who has a second chance in life to experience the world in her own terms.”
Lanthimos says that other references for the whole creative team included Dracula and the films of Fellini and Fassbinder, as well as Powell and Pressburger — all famed for their surrealist and extreme screen visuals.
In the same on-stage discussion, production designer James Price explained how he built four large composite sets at Origo Studios in Hungary, “which become something more like an immersive set, like a Disney theme park,” he said. “Nobody builds sets this big anymore.”
They didn’t use vast canvases of green screen. Instead they deployed painted backdrops and cutouts, “techniques that nobody ever does anymore,” Price said.
Costume designer Holly Waddington, also on the panel, said she drew inspiration for fabric color from anatomical drawings and bodily fluids, “the yucky ones and the beautiful ones and everything in between… pinks and saturated reds and lilac sort of tripe colors. I always tried to relate it to something a bit revolting.”
For The New York Times series “Anatomy of a Scene,” Lanthimos dissects a sequence that takes place in a restaurant in Lisbon, and explains how Emma Stone with choreographer Constanza Macras devised her deliberately awkward dance moves.
Arguably, the real star on the technical side is Irish cinematographer Robbie Ryan, ISC, BSC, working on his second film with Lanthimos after being nominated for an Oscar for The Favourite.
Describing the director himself as “an astral cinematographer,” Ryan says that the desire was always to shoot on 35mm.
“That sensibility is something that kind of lands with the rest of the film where you’ve kind of got a whole sort of universe that is unique,” he says in an interview with Denton Davidson for GoldDerby. “Yorgos wanted to create a world for Bella to be in that nobody else would see. [We see it] only through her eyes.”
The DP’s work started with three months of prep to try out new film stocks and various lenses. “We did one test where we had about 50 lenses that we had to look through, and we had to get through that in one day,” Ryan told John Boone at A.frame. “It was a process of evolving and discovering as far as sorting the language for what we were going to do.”
Shots with an extreme wide angle 8mm fisheye lens were used to explore Baxter’s lab, and are of a type that he employed on The Favourite. This time the wide angles give the impression of almost looking through a peephole or a magnifying glass.
“This is an extension of the wide angle language that Yorgos has been developing over other films,” he told Davidson. “We wanted [to recall] the old vintage photography, where you would see a lot of vignette sort of effects, because the big plate cameras that would have been used in early photography had a lens that didn’t cover the full width of the glass plate that would have been used for the camera.”
The extreme wide angle lenses paired with a 35mm camera allows the viewer to feel like “you can almost step into the world,” he says.
Vintage Petzval lenses, originally ground in 1910 for projectors, were also deployed for period effect.
“They’ve been rehoused, which made it possible to shoot portraits with them as a camera lens,” he told Screen Rant’s Caitlin Tyrrell. “They had this beautiful way of creating a soft fall, a shallow focus and a kind of a crazy bokeh. But they evoked a lot of early photography. It makes me feel that we are connected a bit to the old world of photography, almost painterly. I remember the production design team mentioning Hieronymus Bosch quite a bit in prep.”
VistaVision lenses from another antiquated filming technology were adapted for use in a specially constructed “Frankenstein camera,” Ryan told an audience at Camerimage, as Variety’s Will Tizard reports. This achieved the desired period look but was tricky to work with, he said.
He also said that the results at times bordered on “mystical,” citing an incident when the camera’s “crap batteries” began to run down as he was filming Bella awakening from the dead. The film’s slower transport speed resulted in a slightly sped-up Stone sparking to life in a way no one had quite anticipated.
With Lanthimos adamant that he wouldn’t do additional dialogue replacement in post-production, it also meant the VistaVision camera could only be used for scenes where capturing dialogue on set wasn’t an issue.
Augmenting these old-school techniques and kit was the use of a virtual production screen to help create the views from the cruise ship.
Ryan called the 70-meters-long by 20-meters-high wall a “moving painted backdrop” in an interview with James Mottram for British Cinematographer. “For the cruise ship, Yorgos was always very keen to try out an LED backdrop, because then we could have the waves moving and the clouds moving,” he said.
Even though the set was small enough relative to the wall, shooting on wide angle meant they had to mask the ceiling. There were also issues with needing to illuminate the foreground set with a lot of light because he was shooting on negative film.
“That spillage of light is really painful, because it makes the LED wall lose its punch,” he revealed. “So, you’re kind of having to balance out so much all the time… it was a technical head wreck to try to keep the light on the deck, but not on the screen. And the fact that the deck of the ship was only probably four meters away from the LED wall made it really very difficult to stop this light spill, which made my life hell!”
Even the film stock itself was pushed to the extreme. Portions of Poor Things are shot in black and white, and Lanthimos was keen to shoot other sequences using Ektachrome. Because he wanted Ektachrome in 35mm, Kodak had to manufacture it specifically for the film.
“They only ever made it as a 16mm Ektachrome, so Kodak cut it to 35mm and we processed it as reversal,” he explained to A.frame. “That’s something that’s never been done before. It’s actually a lot more versatile a stock than I thought it would be, but when we were filming with it, we were under the impression that if you were to underexpose, it would be irretrievable. So, I was [thinking], ‘Oh my God, if this stock comes back underexposed we’re in trouble.’ So you had to get it right. But the results were beautiful.”
Another challenge for Ryan was learning to shoot a lot of the picture on zoom lenses. Since he also operates the camera he had to perfect zoom control, as he explained to A.frame.
“For me, the wide angles are not difficult. I just put the wide angle on and everybody else — production design and sound — has a nightmare. The challenge for me, camera-wise, was the zoom, because I didn’t want to mess up any of the acting. I got the hang of it, but it was still nerve-racking and it pushed me to my limits.”
The studio-bound film was also unusual for a DP who typically shoots on location. This presented particular challenges around the lighting.
“The great thing was it kind of still felt like we were on a location because they just built the locations in this amazing detail,” he told Denton. “So everything in front of the camera is there. It was the same approach I would do normally, just I had to do a lot more lights and we had to build skies for cities like Paris and Lisbon.”
The wide-angle scope was so extreme that the fantastically detailed Victorian-style sets had to be created to all but completely wrap around the camera — which also made hiding lights and sound gear a challenge.
“They created all these composite sets, where you can walk in the front door and every little thing is shootable,” Variety reports he said at Camerimage. What’s more, Ryan added, is that sets don’t fly away to make space for the camera as it passes — instead, it must move through real rooms, halls and up and down stairs.
Choosing to shoot half an hour of the film in black and white and on B&W film stock, rather than shooting color and converting to B&W in post, was another key creative decision.
Davidson gets Ryan to talk about this in relation to the opening shot of Emma Stone dressed in an elegant blue outfit which then cuts to the black and white footage.
The reasoning, says Ryan, “and I’m probably gonna get in trouble for saying it,” is that if audiences saw a black-and-white image at the beginning of the film they might think the entire film was going to be black and white, and might tune out.
“I think we put a color shot at the start so everybody will think it’s a color film, and then it goes to black and white, [then] goes into color again. That was sort of the theory behind that,” he continued.
“What I love about the use of color and black and white in the film, is that usually when you see a film, flashbacks are in black and white. But in this film, the film is in black and white and the flashbacks are in color.”
Ryan revealed to Movieweb that the decision to shoot black and white came just nine days before principal photography. “Yorgos said he had to go ring the producers at Searchlight, and it was like touch and go whether they would let him do it.”
Posted December 11, 2023
How Martin Scorsese and Thelma Schoonmaker Reworked and Reframed “Killers of the Flower Moon”
TL;DR
In her 22nd collaboration with Martin Scorsese, Thelma Schoonmaker, ACE, talks about the process of changing the film midway into production to focus on the central love story.
The celebrated editor discusses how they test cuts both with each other and audiences.
She and Scorsese are longstanding cineaste curators — and financiers — of the legacy of the great mid-20th-century British filmmaking duo Michael Powell and Emeric Pressburger.
Veteran editor Thelma Schoonmaker, now 83, is a graceful, generous and fascinating interview subject as she discusses Martin Scorsese’s Killers of the Flower Moon.
“The love story is the basic thing that Marty decided to focus on,” she told Matt Feury of The Rough Cut podcast. “The idea about the film changed when Leo DiCaprio decided he would like to play Ernest instead of the role of the FBI man [Jesse Plemons]. That was a dramatic change to the script, as you can imagine, and they were still working on that as we were shooting. Lily Gladstone and DiCaprio were working with Marty to create scenes that would show the evolving love story.”
She describes how the film teases out the complex character of Ernest, as someone who seems both to have genuine affection for his Osage wife, and yet is capable of facilitating murder.
“The audience enter this world and learn and experience things through Ernest, but we’re not really aligned with him because we only get a true sense of who he is, the atrocities and the violence, over time.”
The opening scene, for instance, depicts Robert De Niro’s character sizing up his nephew, much as the audience is.
“The way we worked on the rhythm of that scene, was to make sure that we sometimes paused for a few seconds, more than you normally would. Because you see that De Niro’s trying to make up his mind. What questions should I ask next to find out if this guy’s going to be a tool? As Ernest is. It’s obvious in the film that he doesn’t read, for example. He’s been horribly educated, whereas his uncle is much better educated.”
She and Scorsese tend to screen the movies they work on in multiple different cuts, fine-tuning in reaction to select audiences, as she explained to Craig McLean at Esquire.
“With our movies, we do rough cuts — sometimes as many as 12,” she said. Those cuts-in-progress are screened for people in her and Scorsese’s New York and Hollywood inner circles. “Then we start opening up to people we don’t know. Then we go to bigger audiences. And we learn from what we’re hearing, and then we do another cut.
“Then we screen again, and then we do another… we’re very lucky. A lot of editors aren’t given that kind of time, which I think they should be.”
Schoonmaker explains to Art of The Cut, “The fact that somebody who doesn’t know the movie is in the room with you affects you deeply. You’re very very conscious of people moving, or do they laugh? Or don’t they laugh at the right place? Or the wrong place? How are they feeling afterwards? Of course, we do talk to people at length afterwards to find out how they’re reacting.”
Sometimes there are big changes in direction — as was the case for Killers of the Flower Moon. “We usually do move things around when editing, except for Goodfellas where everything was perfect right from the start,” she told Feury. “That movie was like riding a horse. It knew where it wanted to go. We dropped only one shot. That film was just there.”
Honoring the Osage and Recognizing Powerful Scenes
Killers of the Flower Moon is dedicated to the memory of musician and composer Robbie Robertson, someone who had a hand in the music, in various ways, for many of Scorsese’s films ever since Scorsese filmed The Last Waltz (featuring The Band’s last concert) in 1976.
Schoonmaker says the score’s throbbing bassline was something that Robertson came up with. “This culture, as you see in the last shot, and the dances that they do, are very sacred, you have to be invited to them, they’re not tourist things. So the drums are incredibly important. The Osage actually consider the drum a person as they do the pipe.
“So I think that Robbie being half Mohawk, Marty definitely wanted an indigenous person to do the music, and felt that this would drive the movie all the way through to the end. You know, it also is probably blood running through your veins. The fact that he continuously employed it meant that in his mind, he was giving it to Marty as a way to move the film along.”
In addition to the scoring, Scorsese wanted to emphasize the indigeneity of the characters. He did so in part by including several pivotal scenes in which Osage is spoken but no subtitles are provided.
Schoonmaker tells Steve Hullfish in an episode of The Art of The Cut, “There are many times in the movie where you do hear the Osage purely, which is a very, very good decision which I resisted at first. Not hearing it by itself. You don’t need to know what he’s saying in the wedding ceremony, for example, you know, he’s marrying them, right?”
However, in a late scene in which DiCaprio and Gladstone’s characters are arguing, Scorsese again opted for no subtitles, which Schoonmaker says “was a very brave and correct decision.”
Although Schoonmaker and Scorsese have worked on many projects together over the years, her instincts don’t always mesh with his, at least initially.
Scorsese knows, she says, he “could trust me to do what was right for the movie, that we weren’t going to have ego battles in the editing room about who’s right and who’s wrong.
“So when there ever is a really major disagreement, which is rare, I am always more than happy to show him what he has asked for. And then if I want to show him options, then I show him options. And he’s very happy to look at those. And then we’ll decide which one is best. But it’s never a battle.”
But, she says, “There’s never a problem when something’s that powerful. There’s never a question” of what to do, referring to DiCaprio’s performance in the courtroom scene.
For maximum impact, Scorsese instructed Schoonmaker to “cut away only when we absolutely have to. I want to just hold on Leo for the entire duration of the testimony because he is so brilliant.
“And he is. So we only cut away when the prosecutor points to De Niro and says he is now talking about this man. And that switch pans over to De Niro because Leo has just incriminated him.”
(And by the way — Schoonmaker is adamant that the film should be viewed in one sitting, with no pauses, even at home, to fully appreciate these cuts, regardless of the 206-minute runtime.
Dazed Digital’s Nick Chen reports that Schoonmaker was incensed to hear some theaters screened the film with an intermission: “That’s really horrible. There’s a build. It’s very important. There’s a long build that you have to feel. If you cut it, you’re not going to feel that! Don’t pause it!”)
Powell and Pressburger’s Oeuvre
Her custodianship with Scorsese of the film œuvre of her late husband Michael Powell (The Red Shoes, A Matter of Life and Death and Black Narcissus — with Emeric Pressburger — and Peeping Tom) crops up time and time again. The BFI in London recently held a career retrospective including newly minted versions of films like The Red Shoes, and Schoonmaker is a more than able commentator.
She tells Feury, “Michael Powell and Emeric Pressburger used to do what they called placing little bombs in a movie: little things that you may just barely notice that explode later. That is something that Marty would have noticed in his, you know, devouring of the Powell and Pressburger films.”
“Marty says The Red Shoes is in his DNA,” notes Schoonmaker to Esquire. It’s a film that she first saw aged 12 while living on the Caribbean island of Aruba, in an “American colony” created by Standard Oil.
Returning to the U.S., aged 15, she tuned into a “wonderful TV show called Million Dollar Movie, where they ran one film nine times a week.” She later learned of another avid viewer: “Marty would [try to] watch a Powell and Pressburger movie [all] nine times unless his mother said: ‘If you don’t turn that off, I’m going to start screaming.’”
That’s because, with the rise of realism in British cinema — “kitchen sink dramas” such as Saturday Night and Sunday Morning (1960) and This Sporting Life (1963) — the films of Powell and Pressburger fell out of fashion in the UK. They were viewed as conservative, colonial, old-fashioned.
Key in that canon is Powell’s transgressive horror from 1960, Peeping Tom. In 1979 Scorsese arranged for Peeping Tom to be shown at that year’s New York Film Festival, and then paid for its redistribution in U.S. cinemas.
To mark the moment, Scorsese held a dinner in New York in Powell’s honor. He invited along the editor he’d hired to cut his latest movie, Raging Bull, partly on the advice of Powell.
“I was just so struck by Michael,” recalls Schoonmaker, who had last worked with Scorsese on his debut feature, 1967’s Who’s That Knocking at My Door.
“He was so extraordinary. He came back to talk to me — I was editing Raging Bull in a bedroom, and we had film racks in the bathtub.”
That was how Schoonmaker and Powell met. They married in 1984. He died in 1990. “Marty gave me the best job in the world and the best husband in the world!”
Posted December 10, 2023
Deloitte Anticipates 2024 Will Be a “Transitional” Year for Generative AI
TL;DR
Deloitte’s annual Technology, Media and Telecoms report highlights generative AI, sustainability, and monetization in its preview of 2024.
Streamers will charge more for premium content, fight user churn with longer subscriptions, and satisfy bargain hunters with additional pricing tiers.
More companies are expected to develop their own generative AI models to drive greater productivity, optimize costs, and avoid the risk of training on public data.
More and more companies are measuring their carbon output, but are they doing enough to forestall global warming?
Generative AI, sustainability and monetization are the major themes of the year ahead, with GenAI set to dominate strategies in Media & Entertainment, according to Deloitte.
In its annual review of Technology, Media and Telecoms, the consultancy says 2024 will be a transitional year for generative AI as companies begin to incorporate it into their tech stacks, while being cognizant of pending legislation.
Expect to see generative AI integrated into enterprise software, with enterprise spending on GenAI anticipated to grow by 30%. The challenge will be in finding a pricing model that captures its value, covers its costs, and is embraced by customers. Deloitte anticipates a conflict between vendors who want to charge per user and IT departments that believe generative AI features should be free.
“Generative AI is poised for a breakthrough in 2024, as it begins to follow through on its promise of improving productivity, creativity and enhancing the way enterprises engage with their ecosystems,” Deloitte Vice Chair Paul Silverglate says.
However, Deloitte Chief Futurist Mike Bechtel warns Fast Company’s Chris Morris: “Over-focusing on any one superhero technology tends to take the eye off the bigger picture.” In this case, Bechtel says an over-emphasis on AI might mean losing sight of “human/computer interaction, compute, the business side of tech, cyber, [and] core modernization.”
Nonetheless, more companies are expected to develop their own generative AI models to drive greater productivity, optimize costs and unlock novel insights. The creation of in-house AI models is intended to avoid the risk of training on public data.
To that end, 2024 will also see the first serious attempts to regulate AI, which is either a much-needed check on the misuse of the technology’s power or a stranglehold on innovation, depending on how you see it.
Most likely, legislators will scope a middle way that permits AI to grow while attempting to rein in the rampant impact of deepfakes, misinformation and copyright infringement.
Deloitte argues that “clear regulation” enables enterprises and vendors to “proceed with certainty,” and expects to see a pragmatic balance between regulatory compliance and fostering innovation.
The European Union’s AI Act (perhaps not ratified until 2025) will likely be the global benchmark for regulation of generative AI. The AI Act and the General Data Protection Regulation (GDPR) address issues including individual consent, rectification, erasure, bias mitigation and copyright usage.
It is also likely that AI will keep lawyers busy for decades to come (until they too find their jobs automated out of the way).
Related is the note that the computer processors powering generative AI could represent half of the value of all semiconductors sold by 2027, valued at $400 billion.
Deloitte predicts that the market for specialized chips optimized for generative AI will be valued at more than $50 billion in 2024, up from close to nothing in 2022. The secure and reliable manufacture of AI chips is seen as vital for national and business-specific innovation, economic success, and national security.
A Streaming Price for All
The drive to profitability is already a feature of content streamers who have switched from prioritizing subscriber numbers (with the domestic market saturated anyway) to appeasing itchy Wall Street investors.
As Deloitte puts it, M&E companies seem to be realizing how hard it is to recoup the historic profits of the pay TV business model.
The expected market correction is a widening of the streamer business model into more paid-for tiers and loyalty schemes, from an average of four options to eight. These range from cheap ad-supported offerings and gated content to premium tiers with instant access.
Deloitte predicts that the top five providers will offer a bewildering 17 SVOD tiers by the end of 2024, more than double the current number.
“Streamers are expected to shift from growth at all costs to making it easier for all their subscribers to get enough value for the price. Viewers may find it harder to wade through the options, but tiering could help them get more of what they want, and less of what they don’t.”
Related is the note that the audio entertainment market is on the cusp of “significant growth.” The global market is predicted to top $75 billion in 2024, a 7% rise across formats like podcasts, streaming music, radio and audiobooks.
Hollywood Looks to Game IPs
The success in 2023 of The Last of Us and animated feature The Super Mario Bros. Movie, not to mention the reality show Squid Game: The Challenge on Netflix, has convinced Hollywood that games can finally be adapted in a way that speaks to both fans of the original and lean-back newbie viewers alike.
“Hollywood is looking to games for new IP that they can expand and monetize, and game companies are eyeing TV and film collaborations to help make their IP work harder and offset soaring game development costs,” notes the consultancy.
It’s not just about capitalizing on IP, though; it’s about creating a new form of entertainment that captivates audiences across multiple platforms. High-performing game IPs are expanding across media formats, reaching broader audiences and increasing their overall franchise value.
“Gaming platforms are giving users the tools to create their own games, which could lead to a boom in quality content, but could be a threat to their own business longer term,” Jana Arbanas, vice chair of Deloitte and its US TMT sector leader, noted in a statement. “And fans of top franchises will see their favorite characters and stories in both games and movies. It’s a crucial time as the industry finds new and profitable ways to keep audiences engaged.”
Further, user-generated content (UGC) in games could disrupt the industry. Platforms are projected to pay out almost $1.5 billion to content developers in 2024. The number of paid independent developers on 3D UGC gaming platforms will exceed 10 million.
“As this space grows, it risks disrupting the dynamics of the entire gaming industry by making endless cheap 3D content available, with generative AI possibly accelerating the trend,” Deloitte warns.
Green Commitments Growing and Questioned
It’s likely that 2023 will be the hottest year in recorded history, yet government leaders at climate summit COP28 barely moved the dial on change. The COP28 president, Sultan Al Jaber, even claimed there is “no science” indicating a phase-out of fossil fuels is needed to restrict global heating to 1.5 degrees Celsius.
With that kind of leadership, is it any wonder that businesses in all sectors are backpedaling on net-zero commitments?
Deloitte predicts that multiple regions will run short of critical metals like gallium and germanium in 2024, and may start seeing shortages of rare earth elements by 2025. If trade restrictions between China and the West escalate, the tech and chip industry could consider “bolstering supply chain resilience” by increasing investments in e-waste recycling, digital supply networks, stockpiling and sustainable semiconductor manufacturing.
According to Deloitte, modern, new greenfield plants for making AI chips could help improve the industry scorecard, but further manufacturing transformation can help both the greenfield plants and existing brownfield plants do better on energy, water and process gas use.
What Deloitte doesn’t appear to address is the carbon impact of generative AI processing itself. Google, Microsoft and others are shy about revealing how much power in their data centers is being used to crunch through R&D on new LLM tools. The sustainability of GenAI should receive far more scrutiny in 2024.
On the plus side, and pushed by investors, regulators and employees, many more companies will likely systematize their environmental, sustainability and governance (ESG) tracking and reporting with software tools in 2024. Deloitte expects the market for these tools to rocket beyond $1 billion in 2024, growing as much as 30% over the next five years.
December 10, 2023
Survey Says: In 2024, Gen AI Will Aid Creativity, Hurt Authenticity
An increase in creativity will be AI’s biggest impact on the creator economy, say 30% of content creators polled by video creation platform Artlist. However, 29% predict AI will result in a marked decrease in authenticity.
“Creativity is no longer the domain of a select few. With the tools and resources now available to everyone, anyone can create and share their ideas with the world,” says Google’s Amir Ariely. “This democratization of creativity is leading to an explosion of new and innovative content, from art and music to video and writing. As creativity becomes more accessible, we can expect to see even more groundbreaking work from people of all backgrounds and experiences.”
However, the creativity versus authenticity problem is particularly salient in the age of social media, where audience engagement lives or dies according to perceptions of credibility. Nearly 60% of respondents indicated that they were YouTubers, content creators or social media influencers. The remaining participants identified as freelance creatives, indie filmmakers and vloggers.
Per the survey of more than 7,000 people, 24% are hopeful that AI will create more job opportunities that will lead to higher income, but 17% think the technology will generate more competition that will result in lower incomes.
Let’s add some context: About half of those surveyed said that they are paid by clients for their work, and 15% draw a salary from a company, while 12% rely on social media monetization. About 20% rely on subscription models or brand partnerships for revenue, and 26% say none of the above are how they make money from projects.
Only 14% of those surveyed say they create video content as a hobby, while 46% indicate that they make a living from this work. More than 15% say they supplement their income with video projects, while nearly a quarter say they “aspire to” make a living from their work as video creators.
Half of respondents say their video content generates $5,000 or less annually; 33% averaged somewhere between $5,000 and $50,000 per year; and 10% said they made more than $100,000 (4% of whom said that income was $500,000 or above).
AI Attitudes
It’s also important to factor in how familiar the respondents actually are with generative AI. Forty-four percent of those surveyed say they know the basic functions of generative AI tools, and 27% think they are very proficient. However, 29% admit that they don’t know how to use generative AI tools.
Also, 29% of respondents indicate that they sometimes use AI tools while working, and 23% say these tools are already “a big part” of their process. Twenty-two percent believe they will use these tools in the future, while 23% say they don’t think AI tools will help them at work and 3% think they’re too difficult to use for content creation.
At this point, half of respondents think generative AI hasn’t affected their work at all. But 31% say it has created more work opportunities and income, and 10% say genAI has meant more opportunities but reduced income. For 9%, AI has been a net negative, resulting in both lost work and lost income.
The survey also checked attitudes about generative AI and content authorship. Slightly more than half (52%) of those surveyed said they would publish AI-generated work under their own name because they “worked on it,” while 48% said they would not because they “cannot take full credit” for the end product.
In the future, 52% predict their work will become easier because of the time-saving nature of AI tools; 13% think it will ease work through inspiration; and 10% say AI will reduce the amount of effort required for work. Some respondents are concerned that AI will complicate their work process: 15% fret that AI will take time to get good results, and 10% think that copyright and royalties concerns will be a drag on their efficiency.
With those beliefs in mind, let’s consider the type of work participants say that they create: 32% make promotional videos; 30% short films; 24% commercials; 23% vlogs; 13% music videos; 12% podcasts; 12% tutorials/how-to videos; 9% product reviews; 8% feature films; and 25% other. (Note that respondents could choose more than one answer.) All of these areas are ripe for disruption and innovation, but AI tools are more fully developed for some use cases than others.
Across M&E, generative AI will be increasingly used to drive down costs, resulting in the replacement of many human jobs by computers.
Coming into play in 2024, there will be guardrails in the form of case law surrounding copyright as AI exponents assert their right to fair use and artists fight back.
AI is now a marketplace open to any number of startups, including in the M&E space. On a wider level, there will be a battle between Big Tech closed models and smaller open-source startups.
Battle lines will be drawn in three areas: the legal right to use AI; between open source and proprietary AI tool developers in IT; and Hollywood Studios versus the legion of employees from A-list talent to production crew.
Few industries will be more directly impacted by generative AI than Media & Entertainment, and as the technology evolves in its second year, the battle lines are being drawn.
Broadly, the battle will be fought on three fronts: the legal right to use AI; open source versus proprietary AI tool developers in IT; and Hollywood Studios versus the legion of employees, from A-list talent to production crew.
Hollywood and AI
The resolution of the strikes, for both actors and writers, has only pushed the issue a couple of years down the road. Ultimately, it would seem, M&E is going to be shaken up for better and for worse.
“The last decade in film and TV was defined by the disruption of content distribution and the next decade will be defined by the disruption of content creation,” summed up industry analyst Doug Shapiro in an extensive post on Medium explaining how all aspects of production would be impacted.
In its special report, “Generative AI in Film & TV,” Variety found the tech already beginning to disrupt traditional methods, with generative AI tools currently used to automate some creative tasks. Its impact stands to be positive, Variety concluded, “as it eliminates rote work, speeds project timelines and allows productions to pursue previously impossible creative paths prohibited by constraints on cost, time and even physical reality.”
At the same time, Variety notes that its use promises to reduce the need for certain processes and workers to achieve the same level of output. Spelled out: That’s job losses.
Shapiro breaks down the cost of movie production, including the roughly 50% that goes to “below-the-line” crew and production, of which 25-30% is post-production (and of that percentage, mostly VFX). All in all, roughly two-thirds of these costs are labor, he says. “It is a sensitive topic for good reason, but over time GenAI-enabled tools promise (and threaten) to replace large proportions of this labor.”
Practical use cases are already cropping up across all stages of the TV and film production process. These include story development, storyboarding/animatics, pre-visualization, B-roll, editing, VFX and localization services.
How far will this all go? Even making the relatively conservative assumption that TV and film projects will always require both human creative teams and human actors, Shapiro says future potential use cases include: the elimination of soundstages and locations, the elimination of costumes and makeup and even “first pass editing.”
“In the future, it is likely that editing software will make a first pass at an edit, which can then be reviewed by a human editor,” he suggests. “Similarly, it’s easy to envision an editing co-pilot or a VFX co-pilot that could create and adjust VFX in response to natural language prompts.”
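To make the “first pass edit” idea concrete, here is a minimal, purely hypothetical sketch of how a co-pilot might turn a natural-language request into changes on a simple cut list. The Clip structure, the keyword matching that stands in for an actual language model call, and the function names are all invented for illustration; nothing here describes an existing product.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Clip:
    name: str
    start: float  # seconds into the source footage
    end: float


def first_pass_edit(clips: list[Clip], request: str) -> list[Clip]:
    """Toy 'co-pilot': apply a natural-language request to a rough cut.

    A real system would hand the request to a language model; here a
    simple keyword check stands in for that step so the sketch stays runnable.
    """
    edited = list(clips)
    if "tighten" in request.lower():
        # Trim half a second from the tail of every clip.
        edited = [Clip(c.name, c.start, max(c.start, c.end - 0.5)) for c in edited]
    if "drop b-roll" in request.lower():
        # Remove clips tagged as B-roll in this toy naming scheme.
        edited = [c for c in edited if not c.name.startswith("broll")]
    return edited


rough_cut = [
    Clip("interview_01", 0.0, 12.0),
    Clip("broll_city", 0.0, 4.0),
    Clip("interview_02", 12.0, 25.0),
]

draft = first_pass_edit(rough_cut, "Tighten the pacing and drop B-roll")
for clip in draft:
    print(clip.name, round(clip.end - clip.start, 1), "s")
```

A human editor would then review and correct the draft, which is precisely the workflow Shapiro sketches.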
You can argue, as Shapiro does, that we have a “visceral negative reaction” to anything that’s supposed to look human but doesn’t, the so-called ‘uncanny valley.’
“In which case we will still need human actors, possibly for a long time — but it would also mean that every other part of the physical production process would be subject to being replaced synthetically.”
All of this will likely have a profound effect on production costs. “Over time, the cost curve for all non-Above The Line costs may converge with the cost curve of compute,” is Shapiro’s plausible, if disheartening, conclusion.
The potential for lower production costs would seem a silver lining for Studios but it also presents a daunting change management challenge.
“Studios should start either by experimenting with non-core processes or developing skunkworks studios to develop ‘AI-first’ content from scratch,” Shapiro says.
Peter Csathy in TheWrap thinks the major studios, faced with mounting Wall Street pressure to transform their business models, will begin to focus on generative AI “to increase output and cut costs.” Early experiments, he suggests, will include hyper-automation in visualization and initial uses of “Synthetic Performers.”
Streamers like Netflix, “with Big Tech DNA coursing through their veins,” will lead the way, he says.
Legislation to Tackle and Protect
The EU, the US Congress and individual US states will pass significant AI legislation that directly impacts the M&E industry in the next 12 to 18 months. President Joe Biden’s recent Executive Order points the way.
“Congress will demand that the Big Tech companies behind GenAI give some basic level of transparency about the material on which their large language models are trained,” says Csathy. “Regulators will also try to get ahead of the game — a stark contrast to when they were largely absent when social media rose in popularity and importance (and caused significant harm).”
Csathy expects the creative community to do its best to keep AI companies honest by implementing so-called “forensic AI tech” like watermarking to identify whether relevant creative works were “scraped” or not. That in turn will promote “opt in” solutions for AI training.
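As a purely illustrative aside, the crudest version of such forensic matching can be imagined as checking a content fingerprint against an opt-in registry before training. The hashing scheme and registry below are toy assumptions for this sketch, not a description of any vendor’s watermarking technology, which in practice relies on far more robust perceptual or embedded signals.

```python
import hashlib


def fingerprint(work: bytes) -> str:
    """Toy content fingerprint: a SHA-256 digest of the raw bytes.

    Real forensic systems use perceptual hashes or embedded watermarks that
    survive re-encoding; an exact byte hash is only a stand-in here.
    """
    return hashlib.sha256(work).hexdigest()


# Hypothetical registry of works whose owners have opted in to AI training.
opted_in_registry = {fingerprint(b"sample licensed artwork")}


def may_train_on(work: bytes) -> bool:
    """Return True only if the work's fingerprint appears in the opt-in registry."""
    return fingerprint(work) in opted_in_registry


print(may_train_on(b"sample licensed artwork"))  # True
print(may_train_on(b"unlicensed screenplay"))    # False
```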
Startups to Rival Big Tech
The battle between proprietary AI and open source AI is at its fiercest. Broadly speaking this is the battle between Big Tech and smaller startups, and it is being fought in the market. Perhaps OpenAI/ChatGPT’s lasting legacy will be in opening up the first bona fide market for AI. In fact, AI, as the boardroom musical chairs at OpenAI have shown, is no longer controlled by scientists in the lab but by Wall Street.
CambrianAI analyst Alberto Romero, in his blog The Algorithmic Bridge on Substack, characterizes the debate like this: “The open-source scene is vibrant, full of enthusiasts who firmly believe AI shouldn’t be in the hands of the few and are working relentlessly to make their vision of a better, democratized world through AI a reality. They have detractors who think AI, as a (potentially) very powerful (and thus dangerous) technology, shouldn’t be available for anyone to use.”
He adds, “If the open-source community wasn’t pushing as hard as it is, closed businesses would capture all the value.”
Open source-based startups are also growing in number and in quality of output.
“They’re catching up with the best models, such as GPT-4,” says Romero. “While closed-source LLMs [like ChatGPT] generally outperform their open-source counterparts, the progress on the latter has been rapid with claims of achieving parity or even better on certain tasks. This has crucial implications not only on research but also on business.”
He thinks that the era of extremely large models dominating AI was just a phase and it’s coming to an end.
“Small and cheap is the future,” he says. “Open-source AI is becoming a powerful counterforce to Big AI as more people realize that this tech shouldn’t be in the hands of a few — it’s catching up.”
Csathy thinks Big Tech companies like Alphabet will try to have it both ways. “Desperate to keep up with OpenAI (and Microsoft) Alphabet will relentlessly march on with its AI development while trotting out its new SynthID watermarking solution to quell the creative masses,” he predicts.
“Alphabet throws these bones to the creative community, while its stock price rockets upward and the entertainment industry struggles to monetize amidst its continuing transfer of wealth to the Big Tech players that disrupt it.”
Industry analyst Forrester predicts wide adoption of bring-your-own-AI (BYOAI) tools, with organizations struggling to manage the impact.
December 7, 2023
Bring Your Own AI (BYOAI) If You’re Planning to Transform the Workplace
TL;DR
Industry analyst Forrester predicts next year will be a particularly good one for Google, Meta and TikTok, platforms that can harness generative AI and/or Gen Z.
In its “Predictions 2024: Media And Advertising” report, the analyst foresees that organizations will increasingly embed AI into their data and analytics before moving on to wider implementation across the workplace.
However, this won’t happen fast enough, since half of employees will bring their own AI (BYOAI) tools into the workplace, with organizations struggling to manage the impact.
The media industry will regain confidence in 2024, fueled by the rise of generative AI and stabilizing advertising revenue, according to industry analyst Forrester. Google, Meta, and TikTok in particular are poised for a strong 2024.
In its “Predictions 2024: Media And Advertising” report, the analyst foresees that generative AI “will transform Google into the next Google.” It explains that as Google harnesses GenAI, it will help the company sustain dominance as the number one search engine. Forrester conducted a survey with its ConsumerVoices Market Research Online Community and found that 73% of online adults would rely on Google to verify suspect responses from ChatGPT.
“In 2024, Google will leverage its credibility and commercialize its C4 data set to deepen its moat as the crawler and repository of reliable information,” Forrester states.
It predicts that TikTok will gain the lion’s share of linear TV budgets for Gen Z-minded marketers. Citing research that 86% of B2C marketing executives in the US are prioritizing better ways to reach Gen Zs and Millennials, it adds that these audiences are spending their entertainment time on “nonpremium video and gaming environments,” with around 40% of young adults in the US and the UK saying that they’re on TikTok constantly.
In an effort to connect with Gen Z, Chips Ahoy already moved most of its linear TV budget to social and digital channels. TikTok, not TV and CTV, will dominate media budgets for marketers trying to reach this influential audience.
AI dominates Forrester’s list of trends and predictions. The real question is: will GenAI and AI in general live up to the massive amount of hype we’ve seen to date?
Unsurprisingly, Forrester’s answer is yes. 2024 will be another banner year for AI overall, it states, ushering in a new era of “intentional AI,” where gimmicks and technical experimentation give way to more focused and strategic initiatives.
This trend is already underway. Forrester says 67% of enterprises are embedding GenAI into their overall AI strategy.
“Every organization right now is asking themselves how they can use AI to improve their business,” said senior analyst Andrew Hewitt in an accompanying podcast, “Predictions 2024: Where Will AI Go Next?”
“And I think the overall consensus is that they want to be able to use AI in a way that’s very personalized to their specific business and allows them to drive outcomes for their business.”
While that is certainly the end goal for many organizations, Forrester also found that many are struggling to put it together. As a result, the analyst is starting to see organizations contend with a new concept: bring your own AI (BYOAI). In other words, employees bring their own consumer versions of AI tools to work.
“Of course, the most popular is ChatGPT and using that in different parts of their work,” Hewitt said. “What’s ultimately happening is that while organizations are striving to provide that kind of corporate sanctioned AI capability or develop that strategy, they’re not able to do it fast enough. That brings in the consumer oriented services that we believe many employees are going to be using over the course of the next year.”
Forrester’s formal prediction is that in 2024 60% of workers will use their own AI to perform their job and tasks. That’s more than half of the workforce using some form of AI to do a substantive part of their job.
Added analyst Kim Herrington, “That could be a generative AI system or could also mean AI that’s embedded in an application that maybe isn’t sanctioned by the business or that that employee owns themselves. We predict that 60% of workers are going to actually bring their own AI, similar to the movement around bring your own device, and use that for their work over the next 12 months.”
Herrington went on to say that employees will use those tools to automate big portions of their job, whether that’s content generation or summarization of articles or decision support or using it in a sales scenario.
“We foresee that organizations, or employees specifically, are going to be successful in improving their productivity over the next 12 months. At the same time, it also introduces a lot of risks from a legal perspective, from a security perspective and from a privacy perspective.
“We’re probably going to see some big blunders from organizations in terms of unsanctioned use of AI tooling by the workforce, leading to some negative business impact, whether that’s a privacy violation, a security infringement, or legal jeopardy.”
Ultimately, that will end up driving organizations towards corporate sanctioned AI capabilities, Forrester argues. “While they’ll definitely build their own policies to manage ‘bring your own AI’ in the workplace, ultimately, it’s going to push them towards developing and delivering a corporate sanctioned version of AI that the workforce can use without jeopardizing security management or legality of the overall AI system itself,” Herrington added.
The rapidly growing and widespread use of AI in the workplace also requires new training programs for professionals. Forrester predicts that 60% of data and analytics professionals will get prompt engineering training in 2024. Prompt engineering is the practice of creating and refining the instructions given to an AI model to get the desired responses.
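For readers unfamiliar with the practice, the sketch below contrasts a vague prompt with a refined one sent to a chat model. The OpenAI Python client usage and model name are assumptions for illustration only, and the “refined” wording is just one plausible iteration, not anything Forrester prescribes.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1+) is installed

# First attempt: a vague instruction that leaves audience, length and format to chance.
naive_prompt = "Summarize this report."

# Refined attempt: the same task, with role, audience, length and format spelled out.
refined_prompt = (
    "You are a media analyst. Summarize the attached advertising report "
    "for a non-technical executive in exactly three bullet points, each "
    "under 20 words, and flag any figure that looks uncertain."
)


def ask(prompt: str, document: str) -> str:
    """Send an instruction plus the source document and return the model's reply."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

# Comparing ask(naive_prompt, report) with ask(refined_prompt, report) on the
# same document is the refinement loop that prompt engineering training teaches.
```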
Yet only 33% of US and UK Data and Analytics employees say that their organizations currently provide training on how best to communicate with chatbots or intelligent agents via prompts.
“In order to capitalize on AI, not only are businesses going to have to fund AI developments, but they’re also going to have to budget for AI search, training and creation of those different prompts,” said Herrington, as well as budget for data communicators to “evangelize the AI tooling” and act as analytics translators to help people adopt those new technologies.
Deepfakes Dominate Misinformation
With national elections coming up in the US and around the world in 2024, there’s increased concern about generative AI’s role in influencing elections. Deepfake ads will become the primary accelerant for election misinformation, said senior analyst Mo Allibhai, although he noted there was a high bar for bad actors to extend their reach.
“Setting up a publisher website, and then being able to actually be admitted to an Ad Exchange and then drawing audiences to that publisher website… It’s pretty expensive as an endeavor.”
The good news is that Forrester thinks generative AI-created disinformation will fail to alter the course of any national elections, because the real challenge lies in the distribution of disinformation, not its creation.
Posted
December 7, 2023
Power to the People: A Social History of the Internet
TL;DR
“Extremely Online: The Untold Story of Fame, Influence, and Power on the Internet” rewrites the history of social media from the point of view of content creators.
Author and Washington Post columnist Taylor Lorenz claims it is content creators who really hold power over Silicon Valley, yet they are neither recognized nor sufficiently rewarded for it.
Lorenz also explains how social media platforms marginalize women, people of color and other communities.
Accountability for this monopoly won’t come about while the US government is invested in the status quo of Google and Meta, she claims.
The idea that the history of the internet is as significant, maybe more significant, when told through the lens of the users and creators rather than the CEOs in Silicon Valley is the crux of a new book by Washington Post columnist Taylor Lorenz.
In Extremely Online: The Untold Story of Fame, Influence, and Power on the Internet, Lorenz says it is users and creators who hold the power when it comes to social media.
“I think it’s so underwritten and for the majority of the rise of social media, there weren’t reporters covering it. It’s kind of crazy to describe how small this beat remains. At least in 2020, there were more reporters covering Facebook alone as a company than all of internet culture.”
Traditional media have been notoriously blind to shifts in social media, she argues, “and refuse to adapt to them.”
Most people think of the rise of social media as dominated by “Silicon Valley men that really saw the future before anyone else and that’s not true,” she says. “Actually, many times they had absolutely no idea what they were doing or they were sort of saved by specific communities that adopted their products.”
Lorenz argues that “social products” aren’t like other tech products in the sense that the user base is the product. The users have a massive amount of influence over the success of a product because at the end of the day the product is the social network platform that users themselves cultivate.
Put another way, the true value of Facebook or Instagram or — dare we say — Twitter/X is the people who use it.
She maintains that users constantly exert their power on the platform’s erstwhile overseers.
“Look at things like the @ sign or the hashtag or the retweet,” she explains. “These were user-driven behaviors that the product then integrated. YouTube itself started as a dating site, but it was the way that users uploaded videos that the company actually leaned into and sort of adapted to and became this widely successful video sharing platform.”
In the book, Lorenz divides social media into two camps: Entertainment model and Facebook model.
“In the beginning, there was this entertainment driven model of social media, which was like people using it for fame and attention and to build audiences,” she elaborates. “This was very much the MySpace model. The Facebook model of social media was all about a walled garden. It capped your friends list at 5,000 people, because they didn’t want people using it for fame. It was more about manifesting your IRL connections on the internet through this highly curated experience.”
The Facebook model acted as a bridge to attract people online but, ultimately, the entertainment model of social media has won.
“This is where we have these private spaces for group chats and direct messaging and things like Snapchat. And then you have the public facing side of things, which is TikTok, basically. If you go back and read MySpace’s marketing materials and compare it to how TikTok markets itself today, they’re shockingly similar.”
Asked how a more equitable and powerful creator economy could be built, Lorenz prescribes first taking the content creator industry seriously.
“[We] need to recognize it as labor and cover it as a labor story. People still think influencing is mostly women taking selfies online. It’s this trivialization of women’s work and of a very female dominated industry. I mean, women built the creator economy. They’re never credited with it. They never get the respect they deserve,” she says.
“If you look at the most highly paid content creators, it’s almost all men. And not only is it all men, it’s mostly white men, it’s almost no people of color. LGBTQ people also pioneered this industry and have largely been pushed out of certain areas of it.”
Quizzed on how TikTok treats LGBTQ creators, Lorenz says all major social media platforms behave the same.
“It’s not like TikTok is uniquely censoring LGBTQ people. Look at YouTube. Notoriously de-platformed LGBTQ creators, restricts their reach, says that their content isn’t family-friendly enough. Same thing with Twitch,” she says.
“Same thing for women. Same thing for people of color. All of these marginalized groups struggle on these social platforms because their content is deemed not brand safe. They get mass reported. Nobody cares about their struggles on YouTube or Instagram seemingly. They care about making TikTok the villain because it’s easier to make TikTok the villain than deal with the systemic issues inherent in our landscape.”
The platforms themselves need far greater accountability to stop that happening. “It’s ridiculous, the amount of power that they have,” she says.
Unfortunately, the social tech landscape right now is dominated by Meta, Google, and TikTok (ByteDance) with “no way for smaller apps that are more responsible to compete and to grow audiences at the scale that Meta and Google have.”
Lorenz also calls out the “intense lobbying” power that US social media giants have that “squash the competition so effectively.”
She pins the blame on lack of oversight on the US government. “[Members of] Congress quite literally have stock in these companies. They want these companies to succeed and they’ve refused oversight. It’s very anti-competitive. Now of course, look at them freak out about TikTok. Not because there’s any inherent problem with TikTok, really. I mean, they pretend that it’s about Chinese ownership. Really, it’s about questioning Facebook and Google’s supremacy in this country.”
“Extremely Online” is WaPo columnist Taylor Lorenz’s creator-centric chronicle of social media and the birth of the creator economy.
January 8, 2024
Posted
December 4, 2023
Evan Shapiro: We May Not “Get” Data Ownership, But We Know It Matters
TL;DR
Evan Shapiro shares the results of a recent PCH Research study assessing American attitudes about data and privacy. The results are a bit surprising (and maybe even depressing).
The good news is that Americans believe their data is personal and that they should be responsible for maintaining their own privacy. But belief isn’t translating into knowledge or confidence in their choices.
Businesses need to account for these attitudes and help consumers, not take advantage of them. Americans say they are willing to punish companies that abuse their data-gathering or don’t respect privacy.
“Americans understand way less about how …personal data is collected and used than responsible adult humans should,” Evan Shapiro argues.
He grounds this opinion in a survey of 45,000 Americans (aged 25 and up) that he conducted with Publishers Clearing House, in conjunction with Syracuse University’s Daniela Molta and NYU’s Tiffany Johnson (founder of Xente Data).
Shapiro describes the findings as a “study of Americans’ knowledge, awareness, and understanding of their data privacy and how companies and organizations use the information we volunteer (often unwittingly)” in a recent Substack post.
Unfortunately, Shapiro says, “[T]he answers Americans gave to these questions are both surprising and alarming.” The study concludes, “Americans who are not data savvy are unlikely to see the value in their data and the ways to guard it.”
Some Topline Findings
First, it’s notable that respondents indicated that they considered all of their data to be personal, although they ranked social media, workout behaviors, political information and other readily available information as “less” personal.
Second, Shapiro and co. found “86% of Americans A25+ are concerned about the privacy and security of personal information and data. It ranks just below the current cost of living and just above the state of the economy.”
Interestingly, “[t]he majority of Americans A25+ consider themselves to be private people (82%) who are cautious about security (77%), yet only 51% feel informed about how their personal data is being used by companies, government, and social entities,” according to the PCH study. Yikes.
For example, consumers are not confident about their cookie decisions, despite being confronted with them daily. “Concern for data privacy and security, tied with increased data literacy, will place more pressure on companies to go above and beyond to protect people’s data,” Shapiro predicts.
What This Means for Businesses
Despite evidence of data illiteracy, consumers increasingly believe that they are responsible for their own data; 87% of respondents agreed they should take ownership of their data privacy.
However, “we know the barriers to managing individual data are high, especially for those who don’t have a strong foundation of data and digital knowledge,” Shapiro writes.
Despite the pervasiveness of the data economy, Americans also seem to be less than thrilled about data sharing. In fact, 38% of respondents said they’d prefer to never share their data, while 2% thought they’d trade it in exchange for knowing about new goods or services.
A third of those willing to share their data said they would want to control access to it, making control slightly more important than monetary compensation (30%) and significantly more important than altruism (24%).
But that doesn’t mean respondents don’t value their data. “Consumers view their DNA (50%), Biometric (47%) and Banking (44%) information as the top three most valuable categories of data, believing that data is worth $500+. While these top three categories are considered the most valuable to consumers, they are also the most widely sought after by public and private organizations (think Ancestry, Clear Travel, and every credit card and banking institution).”
And this doesn’t mean Americans expect businesses to eschew corporate responsibility.
If anything, it may indicate a lack of trust and an understanding that a two- or three-pronged (adding in government oversight) approach will be necessary to correct the path we’ve been on for the past three decades. In fact, the study found 64% “believe that both government and businesses should be responsible for data privacy and security.”
Additionally, past behavior shows that consumers will change their behavior if they believe companies have been irresponsibly using their data. Both Meta (formerly Facebook) and Wells Fargo saw significant fallout from privacy and data misuse scandals.
What Can, Will (and Should) Be Done
Shapiro writes, “[S]urvey respondents indicated they are willing to take more action against businesses they don’t trust, leading to a long-term decline for companies who violate consumer trust.”
Also, the study notes, “the change in cookie tracking and targeting will continue to make it difficult for quality advertisers to find their audiences. This signals a need to change their data approach, which opens the door for ‘Permission Marketing’ and gives businesses the chance to redefine how they think of loyal consumers and stop wasting marketing dollars on consumers who have indicated they aren’t interested.”
Shapiro notes that there is a current patchwork of data privacy legislation (and proposed legislation) in the U.S., creating a difficult environment for businesses and consumers alike. The study advocates for federal action to both regulate data collection and to educate consumers about their rights and responsibilities.
Some red flags: About two-thirds of respondents said they were unsure or chose an incorrect answer when asked whether “Companies with ethical standards and data privacy policies do not sell my data” and “I can stop advertisers and marketers from collecting and using my personal data to target ads online to me”. Also, Americans in the 44-65 year-old cohort were more likely to be unsure or incorrect about these answers, indicating that there is an information gap to be filled.
But the good news is that “there is a desire to learn more and do more” to protect consumer data; in fact, “48% of survey respondents agree with the statement ‘I am willing to learn what kind of data is being collected about me’.”