Creating the “Disconcerting” Color for “The Sympathizer”
Jon Silberg
“All wars are fought twice. The first time on the battlefield, the second time in memory.”
So states a title card at the opening of HBO miniseries “The Sympathizer,” a show in which much of what we see is somewhat askew.
This should not be surprising, since the story is based on the novel by Viet Thanh Nguyen, which is told from the POV of a Vietnamese double agent who works for the US client state of South Vietnam while gathering intelligence for the Communist North.
As the story begins, the character, referred to as the Captain (Hoa Xuande), plans to remain in Vietnam after the fall of Saigon, but his beloved Communist party tells him that, as a bi-racial, bi-lingual man who previously studied in the US, he would be of more value continuing his spying efforts in America for the victorious Vietnamese Communist government.
The seven-part series, created by renowned Korean filmmaker Park Chan-wook (Decision to Leave) and actor/filmmaker Don McKellar (Through a Black Spruce), recounts events as remembered by the Captain — someone who’s endured so much indoctrination and counter-indoctrination and seen so much horror in the name of conflicting ideologies that he cannot be expected to be a reliable narrator. Directors Fernando Meirelles (The Two Popes) and Marc Munden (The Secret Garden) handled the final episodes, which haven’t yet run.
An early clue that the show presents events through this unique prism comes in Episode One, when a number of different American characters are all portrayed by Robert Downey, Jr. (who also served as an executive producer on the series). This is how the Captain perceives these Americans. What we’re seeing is in no way pretending to represent objective reality.
Such disconcerting choices permeate the entire series, from the cinematography by frequent Park collaborator Kim Ji-young (Barry Ackroyd of The Big Short and Bombshell finished out the season) to the casting, production design and the color grade, which was completed at Company 3 New York by Senior Colorist Matt Osborne, overseen by Senior Colorist Tom Poole.
Prior to shooting, Poole, Park and Kim set an initial look in a show LUT designed to bring the ARRI Alexa 35-shot imagery into the type of palette called for by their vision — something neither super modern nor nostalgic, and, says Osborne, “absolutely not anything that could be called a ‘trendy new look!’”
As principal photography commenced, Company 3’s dailies department applied the LUT with the usual tweaks to create the dailies for picture editorial to work on.
As portions of the edited project started coming into Company 3, Poole collaborated early on to help fine-tune the series’ look before turning it over to Osborne, who subsequently spent quite a lot of time with Park and Kim evolving every shot within DaVinci Resolve until it reflected the specific, somewhat uncomfortable feeling the filmmakers were after.
A fan of Park’s work who has thoroughly enjoyed a great deal of Korean cinema over the past decade, Osborne was delighted to have the opportunity to work on this series. Of Park and Kim, the colorist observes, “They are so very focused and extremely color-driven.
“We did quite a lot of manipulation of the images, making them feel vintage and retro using a number of techniques within Resolve and pushing certain colors in directions that aren’t associated with modern filmmaking.”
Cinematographers have raved about the new ARRI Alexa 35’s handling of highlight information and Osborne has plenty of praise for its sensor, too. “Things like sky detail and other highlights that might just [clip] on other cameras,” he says, “we had the ability to bring it all back if we wanted, which is definitely useful as we fine-tuned everything.”
That said, the filmmakers wanted to get away from any clean, digital modern look as much as possible, so Osborne worked with tools within his color corrector to “distress” the images, using film grain plug-ins in various strengths, techniques of blurring and then sharpening the image to make it feel “rough” and halation in the highlights to bring out a filmic feel.
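Osborne’s “distress” recipe (soften, re-sharpen, add grain, bloom the highlights) can be approximated with a few lines of image math. The sketch below is a hypothetical NumPy illustration of those steps on a normalized grayscale frame, not Resolve’s actual processing; the function names and strength values are invented for the example.

```python
import numpy as np

def box_blur3(img):
    """Average each pixel with its 3x3 neighborhood (edge-padded)."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def distress(img, grain=0.03, halation_thresh=0.85, seed=0):
    """Blur-then-sharpen, film grain, and highlight halation, in that order."""
    rng = np.random.default_rng(seed)
    # 1. Soften, then unsharp-mask for a "rough" sharpened feel.
    sharpened = np.clip(img + 1.5 * (img - box_blur3(img)), 0.0, 1.0)
    # 2. Film grain: per-pixel noise scaled by an adjustable strength.
    grainy = np.clip(sharpened + rng.normal(0.0, grain, img.shape), 0.0, 1.0)
    # 3. Halation: spill blurred highlights back over the frame.
    glow = box_blur3(np.where(grainy > halation_thresh, grainy, 0.0))
    return np.clip(grainy + 0.3 * glow, 0.0, 1.0)
```

Real grading tools apply these operations per channel with far more controllable filters, but the ordering — distress first, then push color — mirrors the workflow described above.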
The palette itself, Osborne recounts, was the result of quite a lot of subtle iterations to get it to where all collaborators felt it evoked the Captain’s internal conflict.
“When the Captain’s in his office,” Osborne notes, “there are greenish lights illuminating everything.” Rather than pulling back on this uncorrected fluorescent look, Osborne explains, “we emphasized the green tones, turning them into this uneasy kind of color that felt a little bit sickly.”
(This intention clearly landed: As writer Jia Tolentino says in her New Yorker profile of Park describing a key scene in the first episode: “A bus, glowing with nauseous light, races toward the plane, carrying evacuees, who then sprint across a runway seen from above.”)
Kim didn’t want skies to be typical blue or skin tones to always be natural. “He wanted everything to be a little bit off center,” Osborne notes. “I think to make the audience feel a little bit more uncomfortable.”
Portions going even further back in the Captain’s life received a different treatment altogether. “For scenes of his childhood,” the colorist explains, “we would feed these more bluey/purply tones in there in a stylistic way.” As the grade evolved, they came up with the idea that for certain shots, if there was a lot of green in the image, they might counterbalance it with [opposite on the color wheel] magenta in the shadows.
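“Opposite on the color wheel” here simply means a hue rotated 180 degrees, which is why green gets counterweighted with magenta. A trivial illustration:

```python
def complementary_hue(hue_deg):
    """Return the hue directly opposite on a 360-degree color wheel."""
    return (hue_deg + 180) % 360

# Green sits around 120 degrees; its complement lands on magenta (~300).
print(complementary_hue(120))  # 300
```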
The first episode shows the fall of Saigon and leads into the next part of the Captain’s adventure after he arrives in California with family members. Regarding the grade for the California scenes, Osborne says, “Typically you go bright, warm and sunny. Instead, we brought out cooler shadows, making the overall style a bit darker. It’s not your typical kind of Californian look. America for these characters is a very strange world — it’s a little bit alien for them and we wanted to reflect that in the grade.”
Further, Osborne played into what he sees as a bit of a film noir element common to much of Park’s work and Kim’s cinematography. “I think there’s a very emotional aspect to the way that he portrays his characters as flawed,” he suggests.
“There’s a deep and mysterious aspect to certain characters. We did do a lot with the Captain’s face where it would be somewhat obstructed by shadow, and we would enhance that even more. It helped subtly bring out this idea that the character is two-faced in a way.
“The more I worked on the show, the more I understood,” Osborne adds. “The Captain is narrating the whole thing as a confession and what we’re seeing is his interpretation of the confession. There are obviously some things he’s hiding in the confession and some things that he’s forgotten.”
Many critics have suggested that “The Sympathizer” succeeds in bringing all the elements together to achieve something truly special. Critic Nandini Balial of RogerEbert.com wrote that the episodes Park helmed are “among his finest work. It isn’t just that his fluid direction makes the episodes feel like one long, gliding film rather than episodic television. It’s that he forges an extraordinary relationship with the material that creates a luminous visual texture; the viewer can practically feel the sweat of Saigon emanating from the screen, the viscosity of rich red blood blasting out of a skull, the lissome slipperiness of pho noodles slurping into happy mouths.”
“Everything I’ve seen [Park] do has this kind of intensity,” says Osborne, “a dark twist on reality with complex themes and characters.” Adds the colorist, whose credits include an impressive list of major, award-winning commercials, “This is definitely one of the most creative long-form pieces I’ve had the opportunity to grade.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
Cancelling streaming services is no longer niche or occasional. Churn has gone mainstream and premium SVODs are going to have to employ new tactics to compete for a share of the household wallet.
Finished watching The Bear? Ditch Hulu. Want to watch Fallout? Sign up for Amazon Prime Video. Time for Slow Horses Season 3? Then cancel Amazon (having already binged Fallout) and get Apple for at least a couple of weeks. More and more of us are doing this, partly because price inflation has exhausted the amount people are willing to spend on stacking SVODs, most of whose content they don’t actually watch.
It’s also because the SVOD system of no-contract, one-month viewing — so important in kick-starting the streaming business — makes it so easy to do.
According to data from Antenna, at the end of 2023 nearly one in four US streaming consumers qualified as serial churners — individuals who have canceled three or more premium SVOD services in the past two years. That’s an increase of 42% year-over-year.
Antenna even identifies a group of “super heavy serial churners,” those who make seven or more cancellations within the past two years, and found that they constituted 19% of subscribers in 2023.
More data: serial churners were responsible for 56.5 million cancellations in 2023, up a whopping 54.6% year-on-year, while cancellations by non-serial churners increased 18.5% to 82.8 million in the same period.
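Assuming Antenna’s year-on-year figures are simple relative growth, the 2022 baselines can be backed out from the 2023 totals. The calculation below is derived from the percentages quoted above, not taken from the report itself:

```python
def prior_year(current, yoy_growth_pct):
    """Back out last year's figure from this year's value and its YoY growth."""
    return current / (1 + yoy_growth_pct / 100)

# 56.5M serial-churner cancellations, up 54.6% YoY -> roughly 36.5M in 2022.
serial_2022 = prior_year(56.5, 54.6)
# 82.8M non-serial cancellations, up 18.5% YoY -> roughly 69.9M in 2022.
non_serial_2022 = prior_year(82.8, 18.5)
print(round(serial_2022, 1), round(non_serial_2022, 1))  # 36.5 69.9
```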
While consumers value flexibility, the implications could be significant for the major media companies, especially as it seems likely this behavior will become even more common.
One option outlined by John Koblin in The New York Times is to bring back some element of the cable bundle by selling streaming services together.
Executives believe consumers would be less inclined to cancel a package that offered services from multiple companies. Disney, for instance, is bundling Disney+, Hulu and ESPN+ into one package and, later this year, will launch a sports streamer pooled with Fox and Warner Bros. Discovery.
Another tactic is to promote “coming soon” content prominently on the home page. For instance, Apple TV+ is teasing Dark Matter, a science-fiction series that comes out in its app in May.
Peacock, meanwhile, tried to deter new subscribers from cancelling with a special offer: sign up for a full year at a discount.
According to Antenna research, cancellation rates for those who did sign up did not drop off a cliff a month later, but instead were close to average.
Netflix appears immune, according to Antenna data. Or rather, it is the one service most likely to remain a permanent part of household bundles, with every competitor part of a revolving carousel that consumers pick and mix according to the latest show to land.
Without a predictable revenue stream, it is harder for streamers to invest in new content, causing them to cut production and deliver fewer standout new releases to market, in a vicious cycle that will gather pace unless something changes.
Podcast Listening is Hitting New Highs… So Who’s Listening?
TL;DR
Podcast reach is increasing across all age groups, according to an annual survey from Edison Research, with female listeners driving growth.
Nearly half of the adult population in the US has listened to a podcast in the last month, up 12% year-over-year.
Thirty-four percent have listened to a podcast in the last week, up 10% year-over-year. Online audio listening has also hit record highs, the report finds.
The number of Americans consuming podcasts has reached record highs and growth is being driven by female listeners, according to “The Infinite Dial,” an annual survey from Edison Research with support from Audacy, Cumulus Media, and SiriusXM Media. The study is based on a national survey in January 2024 of 1,086 individuals age 12 and older.
Nearly half of the adult population in the US has listened to a podcast in the last month, up 12% year-over-year. Thirty-four percent have listened to a podcast in the last week, up 10% year-over-year.
Increases in the number of monthly and weekly podcast listeners are seen across all age groups, but that growth is driven by large increases among women.
The study found that 45% of women have listened to a podcast in the last month, up from 39% in 2023, an increase of 15%. Thirty-two percent of women have listened to a podcast in the last week, up from 27% in 2023, an increase of 19%.
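Note that these growth figures are relative, not percentage-point, changes: 45% of women up from 39% is a six-point rise but roughly a 15% increase. A quick check of the quoted numbers:

```python
def relative_increase(new_share, old_share):
    """Relative growth between two survey shares, in percent."""
    return (new_share - old_share) / old_share * 100

print(round(relative_increase(45, 39)))  # 15 (monthly listening)
print(round(relative_increase(32, 27)))  # 19 (weekly listening)
```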
“Listening levels are up markedly despite changes in how downloads are being delivered and counted,” said Edison Research VP Megan Lazovick.
Online audio listening has also hit record highs, according to Edison Research. More than three quarters of Americans have listened to online audio in the last month, an estimated 218 million people. Ninety percent of those aged 12-34 and 85% of those aged 35-54 have listened to online audio in the last month, it found.
Other findings include that 70% of those age 18+ who have driven or ridden in a car in the last month currently listen to radio as an audio source in their primary car; 55% listen to online audio and 32% listen to podcasts.
AM/FM radio is still popular, with 60% of those polled having a traditional set at home.
An outlier in this audio report is that Twitter/X usage has seen a sharp 30% decline following Elon Musk’s acquisition.
Embrace the Cloud! 5 Steps to Cloud-Based Media Production
Each decade there is a step change in the way people work with media, from tape to digital, and from dedicated editing hardware to software that runs on any machine. Each of these changes takes time, but eventually a new way of working emerges as a result.
Right now, we are in the middle of the next wave of change: embracing the power of the cloud. The important things remain the same with this new paradigm. Editors still have complete creative control of their media workflow, and content is still carefully preserved. But high-performing cloud storage boosts productivity, allows more people to work at once, and fits in with our ambitions to work from anywhere.
Step 1: Cloud-first asset management
In our remote-work world, the first step of this new way of working is that all assets, video and audio, go straight to the cloud. Ingest from wherever the material is created. Host it in the cloud so that anyone (with the right permissions) can access it from wherever is convenient for them. The director on location and the producer back in headquarters can each add comments on their preferred takes, which guide the editor working at home. EditShare’s FLOW media management allows creative teams to work together from anywhere in the world. Plus, with the recent MediaSilo integration, collaboration extends beyond creative teams to wider stakeholders that might need to review, comment or approve the media. This includes side-by-side review capabilities, @user mentions to tag people to comments and visible, forensic, image and document watermarking to deter and detect piracy.
Step 2: Do not download
We all have enough experience of the cloud now to know that egress charges make downloading content potentially expensive. Also, as soon as your video content leaves the cloud, it once again becomes as uncontrolled as sending people home with portable drives. So step 2 simply says do not download: move applications to the media, not the other way around.
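The scale of those egress charges is easy to underestimate. As a rough, hypothetical illustration (the $0.09/GB rate is an assumed ballpark for this example, not any specific provider’s published price):

```python
def egress_cost_usd(size_gb, rate_per_gb=0.09):
    """Estimate the fee for pulling data back out of the cloud.
    The default rate is an assumed ballpark, not a quoted price."""
    return size_gb * rate_per_gb

# Downloading 2 TB of camera originals at the assumed rate:
print(f"${egress_cost_usd(2048):.2f}")  # $184.32
```

Multiply that by every round trip a project makes and the case for keeping the media in place becomes clear.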
EditShare software readily hosts media production tools, including the popular software editors, in the cloud. This means you can use remote desktop access, such as PCoIP, wherever you are, and harness effectively unlimited cloud resources to power through the edit, however complicated it is and however many layers of video, graphics and effects you must build up.
Producers will be able to look in at any time to check on progress and leave notes for team members in real time. Or you can create viewing copies when you need comments, and simply send links to those whose input you need. Editors can respond to suggestions in their own time, rather than having to drop their work to service an attended session. That will boost productivity and maybe even relieve tensions.
Step 3: Distribution
Once the final version is signed off, step 3 – distribution – becomes a publish function. To deliver a completed commercial or programme to a broadcaster, you simply activate a link. For programmes, you could even attach that link to your sales system so the broadcaster can only get the full-resolution content once they have paid for it.
EditShare’s Screeners.com delivers an elegant, out-of-the-box press review solution for delivering prestigious pre-release content to customers across the world. View it on the small screen of your laptop or in your home theater with the Apple TV app. Security is at the heart of Screeners.com, with SOC 2 Type II certification, visible and forensic watermarking, passwordless log-in and anomalous activity reporting.
Step 4: Archiving to long-term storage
Long-term storage is simple when the production and post are complete, and the deliverables have been generated and accepted. You just move it from the active part of your cloud store to deep archive functionality in the same cloud – step 4.
The great benefit of this is that maintenance of the archive becomes someone else’s problem. Anyone with an LTO archive knows that you have to check the tapes on a regular basis, and every few years migrate to the currently supported formats. It is the sort of job that, despite being vital, has no immediately obvious benefit, so it is an easy target when finance directors look to cut costs.
Step 5: Flexibility to create new content from your cloud archive
Step 5 is the ability to rework something, or create new content from existing assets. The cloud archive also preserves all the elements in one place, from scripts and EDLs to visual effects and camera raw originals. If, in the future, you need to rework something for any reason, pull the whole package out of the archive into the working storage area of your cloud.
This idea of archiving everything so it is easy to revisit a project or create new assets out of existing materials is obviously a huge benefit. It depends on all of the production and post materials being created, managed and stored in the cloud. And it all depends upon taking step 1: ingest straight to the cloud and do it all there.
This is not changing what we do. It still depends entirely on skilled, experienced, talented and dedicated people doing the jobs they are good at. But rethinking the underlying technology makes it a lot simpler for them to be their best.
Now if these steps seem familiar, they are the first five principles of the MovieLabs 2030 Vision. EditShare embraces this vision and is providing high-quality tools to help our customers from all industries take advantage of a more powerful way of telling their stories with video.
With “Cowgirls on the Moon,” the Workflow’s in the Cloud
TL;DR
A high-concept sci-fi movie trailer made using modern cloud computing showcases a raft of new filmmaking technologies from virtual production and cloud rendering to generative AI.
Ron Ames, producer of Amazon series “The Lord of the Rings: The Rings of Power,” and Katrina King, global strategy leader for content production at AWS, discussed the making of “Cowgirls on the Moon” at the 2024 NAB Show.
The principal generative AI tool was Cuebric, which was used to generate assets and import them into Unreal Engine, and to automate certain technical aspects of the production.
Many producers remain wary of putting their projects in the cloud, a concern that case studies like this aim to address.
It began as a joke, but putting cowgirls on the moon is a serious attempt to showcase what is possible by using a raft of new filmmaking technologies from virtual production and cloud rendering to generative AI.
Unveiled and demonstrated at NAB Show, the faux movie trailer for Cowgirls on the Moon is a goofy but high-concept challenge led by AWS that conforms to elements of MovieLabs 2030 Vision.
“It started off as a joke,” Katrina King, global strategy leader for content production at AWS, explained. “Let’s do something ridiculously out there that’s really going to force us to lean into modern cloud computing and generative AI. And I said something like ‘cowgirls on the moon.’ It was a joke, but nobody came up with a better idea so that’s what we went with.”
The aim was to demonstrate the power of three technologies working in tandem: generative AI-assisted virtual production, cloud rendering and VFX with the use of AWS Deadline, and holistic production in the cloud.
“At AWS, we believe very strongly in the responsible use of generative AI,” King continued. “So we used applications that allow artists to work more efficiently and to offload the mundane technical aspects.”
For instance, they used text-to-video generator Runway for concept art and storyboards, another AI tool for facial recognition, and an enhanced speech tool included within Adobe Premiere. The latter tool completely rebuilt the dialogue track as if it had been recorded in an ADR session. “The amount of time that saved us by not having to go into an ADR session was incredible,” King said.
The principal generative AI tool was Cuebric, which was used to generate assets and import them into Unreal Engine and to automate certain technical aspects of the production. All of the backgrounds used in the virtual production and animated in Unreal Engine were generated with Cuebric.
“Once Cuebric exports it we have these different layers which are then presented in Unreal, so that as we move the camera and the camera tracks on the volume we get parallax,” King explained.
Visual effects facility DNEG delivered 36 VFX shots for the production in just eight days. The whole project was essentially run as a full studio and render farm in the cloud.
Project producer Ron Ames talked up the benefits of the virtual production, such as being able to swap out entire infrastructure in minutes and to collaborate across multiple locations.
“We first said, ‘We want these machines to be Linux.’ But then we changed our minds. ‘Now we want them to be Windows.’ Literally in minutes we had new machines up and running,” he said. “The ability to work quickly to collaborate, to tear down walls. We had groups working in Vancouver, in London, LA, Boston, Idaho, Switzerland, Turkey, Tucson, Netherlands [on the project all linked to assets by cloud].”
Ames previously used extensive AWS workflows as producer of Amazon series The Lord of the Rings: The Rings of Power. He thinks other producers remain unconvinced about choosing to put their next project into the cloud.
“Petrified would be the word, not reluctant. The notion that we’ve done it before this way, or we have investments in a certain infrastructure, is one of the impediments to moving forward,” Ames said.
“On Rings of Power, the great good fortune we had was a team of producers and partners at AWS supporting us to try new stuff, and if it doesn’t work, we’re not going to give up, we’re going to make it work and make it work in a way that actually has benefits. Once we saw the efficiencies, the creative possibilities, and truly the collaborative power of breaking down silos, walls, traditional ways we’ve been working, we realized that this had a great value.”
There’s an old tech industry joke that “the cloud” is a fancy way of saying “somebody else’s computer.” That’s a bit of an oversimplification since cloud computing services are a lot more involved than just providing access to a server someone else owns.
But the fact remains that the primary attribute of cloud computing is accessing computing resources — software applications, servers, data storage, development tools, and networking functionalities remotely over the internet.
Increasingly that means everyday post-production processes and crafts like editing too. Like much of post-production activity, the real shift to cloud came with COVID. If productions were to continue behind closed doors then remote and collaborative ways to continue the job had to be found.
While many facility managers and editors found those ad hoc attempts at the start of 2020 to be just about workable, the way the technology was proven to work opened up people’s minds to the benefits of more permanent cloud-based editing.
Today, at the very least, hybrid work-office scenarios are common, with cloud-based workflows no longer considered unusual across all genres ranging from live news and sports to feature animation, scripted TV and documentaries.
In a series of primers (ostensibly to promote its cloud storage), LucidLink explains cloud video editing and outlines the benefits it can offer.
Much of what the company has to say will be familiar to industry pros, but there’s a no-nonsense clarity for anyone unsure.
Cloud video editing refers to workflows that leverage the cloud rather than on-premise infrastructure. Editors can share their data with the complete toolset of a desktop-based NLE such as Adobe Premiere, Avid or DaVinci Resolve. The key difference is that the data itself is stored in the cloud, rather than on local devices. With the right software, cloud-based video editing can also include tools installed on virtual machines that perform parts of an editing workflow.
One of the chief benefits of working this way is remote collaboration. Since cloud-based systems and storage are inherently accessible from anywhere in the world, this enables both hybrid and fully remote workflows for editing teams.
Configured correctly (and the article doesn’t particularly delve into the costs of cloud storage and data transfer, which vary greatly depending on facility needs), cloud can save time and cash.
“Although the cloud offers clear advantages when it comes to smaller files (like low-res video proxies), until recently handling large files was an unsolved challenge for cloud video editors due to lengthy upload and download times,” LucidLink notes, before offering its tech as a solution.
There’s also a look at the merits of cloud versus on-premise set-ups with files residing in a SAN or NAS system within a facility.
This latter approach, says the vendor, “requires copying large files to hard drives or using file transfer services if collaboration requires working with freelance talent in locations other than the facility itself.
“Even when working with large amounts of raw video data, editors often need to search, analyze and tag files, preferably in real time. The larger the file, the longer it takes to download, upload, render, or share. Beyond the costly hardware investment, these systems still don’t solve the problem of waiting for files to download or distribute.”
However, it’s not usually a zero-sum game. Most facilities currently prefer to keep a foot in both camps, in part as a safety net against data loss.
There are of course lots of choices when it comes to storage and the right strategy is vital for any production, says LucidLink.
“On-prem SAN and NAS systems can be very performant, but those benefits only exist in one location: a facility. The need to collaborate anywhere however is not addressed by these legacy approaches. This is where a cloud-based approach comes in.”
As we have seen from the recent NAB Show, more and more vendors are offering cloud-based workflows. These increasingly start from the camera, where proxies are uploaded directly via the internet to some form of media management platform, from which authenticated users anywhere can download or stream files to work from.
In a few years, looking back at the heavy-duty, power-hungry monoliths of Silicon Graphics machines, Quantel boxes or Autodesk hardware, we will wonder how we ever worked without the internet.
It’s All Happening in the Cloud, Baby: New Camera-to-Cloud, MAM and Cloud Storage Workflows
TL;DR
Here’s a rundown of the latest developments in cloud workflows to emerge from the 2024 NAB Show, including a ground-up redesign from Frame.io, 48TB hardware to store 12K dailies from Blackmagic Design, and a way to move media from Amove’s desktop drive into the Storj cloud platform.
Cloud storage specialist Backblaze is opening up its technology as a white label to third-party vendors and other companies.
Wasabi Technologies is adding AI-powered auto-tagging and multilingual speech-to-text transcription to its cloud storage.
Camera-to-cloud workflows accelerate the creative process. By shrinking the capture-to-edit timeframe, editors can begin working on media instantly instead of waiting for hard drives or delayed file transfers.
Proxy generation of original camera files to H.264 and ProRes is one of the most used features of Studio Network Solutions’ EVO Suite; with the latest updates the process is faster and now supports the latest RED and ARRI cameras.
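As a generic illustration of what proxy generation involves, the sketch below builds a plain ffmpeg command line for an H.264 proxy. This is not how EVO Suite implements it internally; the resolution and bitrate defaults are illustrative assumptions.

```python
def h264_proxy_cmd(src, dst, height=540, video_bitrate="4M"):
    """Build an ffmpeg command that transcodes a camera original
    to a small H.264 proxy, preserving aspect ratio."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",      # -2 keeps the scaled width even
        "-c:v", "libx264", "-b:v", video_bitrate,
        "-c:a", "aac", "-b:a", "128k",
        dst,
    ]

# Example invocation for a hypothetical clip name:
print(" ".join(h264_proxy_cmd("A001_clip.mov", "A001_clip_proxy.mp4")))
```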
Storage and media management specialist EditShare has teamed with Atomos to bring camera-to-cloud workflows from the latter’s camera-mounted monitor-recorders into its collaboration platform MediaSilo via the cloud. After connecting a camera to an Atomos device via HDMI or SDI, pairing the device with Atomos Cloud, and adding MediaSilo as the destination, users can upload proxy files as they record.
It’s the latest such integration with Atomos products. “We’ve always considered ourselves to be a neutral ‘gateway’ to a wide selection of secure destinations for our customers’ content,” said Atomos CEO Jeromy Young.
The grandfather of camera-to-cloud is Frame.io, which was first released in 2015 and is now being revamped by its new owners, Adobe. The fourth version of the asset management software is “more than just an update; it signifies a complete transformation of the product, marking the beginning of a new chapter in how modern teams structure and manage their creative workflows,” says Frame.io co-founder and VP Emery Wells.
Metadata is apparently key to the new Frame.io v4 experience. “Instead of relying solely on a rigid folder structure, you can now organize and view your media based on how you and your team work in a single, unified platform,” explains Wells.
Frame.io has introduced Collections, flexible saved views of assets that let users select, filter, group, and sort media using metadata. “Collections update in real time, reducing the time your team spends manually culling and organizing,” the company says. “They also allow you to organize (or reorganize) your files in unique combinations without needing to make duplicates of your assets, which conserves storage space. Collections is our answer to providing the kinds of flexible workflows that you’ve long asked for, without us dictating the approach, process, or template for how you work.”
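The idea behind metadata-driven views like Collections can be sketched in a few lines: instead of duplicating files into folders, a saved view filters, groups, and sorts a flat asset list on demand. This is not Frame.io’s API; the asset fields and function names here are invented for illustration.

```python
# Toy model of a metadata-driven "collection": a live query over a flat asset
# list, recomputed on demand, so no duplicate copies of media are ever stored.
from itertools import groupby

ASSETS = [
    {"name": "A001_C001.mov", "scene": "12", "camera": "A", "rating": 4},
    {"name": "B001_C003.mov", "scene": "12", "camera": "B", "rating": 2},
    {"name": "A002_C007.mov", "scene": "14", "camera": "A", "rating": 5},
]

def collection(assets, *, min_rating=0, group_key="scene"):
    """Filter by rating, then group by a metadata field (sorted for groupby)."""
    kept = sorted(
        (a for a in assets if a["rating"] >= min_rating),
        key=lambda a: (a[group_key], a["name"]),
    )
    return {k: list(g) for k, g in groupby(kept, key=lambda a: a[group_key])}

# A "selects" view: only clips rated 4+, grouped by scene.
selects = collection(ASSETS, min_rating=4)
```

Because the view is just a query, reorganizing media means changing the query, not moving or copying files.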
Blackmagic Design has expanded its camera-to-cloud workflow by enabling its latest cameras, the Pyxis and Ursa Cine 12K, to transfer proxy (compressed) media direct to the cloud. Its overall distributed collaboration concept for post-production relies on a piece of on-premise hardware. Announced in 2022, the Blackmagic Cloud Store now has a new Max model with capacity for either 24TB or 48TB, the former costing $6,495. This increased capacity of network storage is designed to work with the sizeable files of the 12K Ursa.
One upside is media sync with DaVinci Resolve: the moment a film crew starts shooting, the camera media syncs within seconds so the post-production team can start working.
According to CEO Grant Petty, Blackmagic Cloud Store is designed to handle the large media files used in film and television where multiple editors, colorists, audio engineers and VFX artists all work on the media at the same time. “It even handles massive 12K Blackmagic RAW digital film files,” he says. “Each user gets zero latency and they don’t need to store files on their local computer. That’s perfect for DaVinci Resolve.”
Users can install a local cache of media uploaded either to the Blackmagic Cloud website or services like Dropbox and Google Drive. BMD says this makes working faster because files are distributed globally to as many sites as customers need.
Media organizations are operating under tighter deadlines and narrower profit margins and are looking for ways to speed production workflows while controlling costs. This means tools for migrating to cloud that can manage costs between tiers of “hot” and “cold” storage, as well as between cloud and on-premise stores, are highly in demand.
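The hot/cold cost management described above can be reduced to a simple policy: assign each asset a storage tier based on how recently it was touched, and price storage accordingly. The thresholds and per-GB prices below are invented for illustration, not any vendor’s actual rates.

```python
# Hypothetical tiering policy: assets untouched for more than N days move to
# cheap "cold" storage; everything else stays "hot". Prices are made up.
from datetime import date, timedelta

TIER_PRICE_PER_GB = {"hot": 0.020, "cold": 0.004}  # illustrative monthly $/GB

def pick_tier(last_accessed: date, today: date, cold_after_days: int = 30) -> str:
    """Return the tier an asset should live in, given its last-access date."""
    return "cold" if (today - last_accessed).days > cold_after_days else "hot"

def monthly_cost(assets, today: date) -> float:
    """assets: iterable of (size_gb, last_accessed) tuples."""
    return sum(size * TIER_PRICE_PER_GB[pick_tier(seen, today)]
               for size, seen in assets)

today = date(2024, 4, 24)
cost = monthly_cost([(100, today - timedelta(days=2)),    # active project: hot
                     (500, today - timedelta(days=90))],  # archive: cold
                    today)
```

Real migration tools layer retrieval fees and minimum-retention charges on top of this, but the tier decision itself is usually this kind of recency rule.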
EVO Suite from Studio Network Solutions is a media asset management tool for remote collaborative working. The latest updates enable users to sync, replicate, and back up media from EVO to destinations that include NAS servers on-prem in a facility, FTP and SFTP sites, and a number of cloud storage platforms, including Box.com, Wasabi, Backblaze, Google, AWS and Azure.
A new bandwidth throttling function can control how much of EVO’s processing power is dedicated to automation (transcodes for example) and how much resource to make available to editors working concurrently on projects. A ShareBrowser integrates EVO Suite media management directly into Adobe Premiere Pro and DaVinci Resolve.
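The throttling concept is straightforward to model: cap the share of total throughput that background automation may consume, leaving the remainder for editors working concurrently. This is a sketch of the idea only, not SNS’s implementation.

```python
# Illustrative bandwidth split: background jobs (transcodes, syncs) get at most
# a configured fraction of total throughput; editors get the rest.

def allocate_bandwidth(total_mbps: float, automation_cap: float) -> dict:
    """Split throughput between automation and interactive editors.

    automation_cap is the fraction (0..1) automation is allowed to consume.
    """
    if not 0.0 <= automation_cap <= 1.0:
        raise ValueError("automation_cap must be between 0 and 1")
    automation = total_mbps * automation_cap
    return {"automation": automation, "editors": total_mbps - automation}

# On a 10 Gb/s link, reserve three quarters of throughput for editors.
shares = allocate_bandwidth(10_000, automation_cap=0.25)
```

A production system would enforce the cap per-connection in the storage OS, but the admin-facing knob is exactly this kind of fraction.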
“So, when a producer wants to call out a sub-clip for the highlight reel, or leave a comment at a specific timecode marker, those details appear directly in the editor’s timeline in Resolve and Premiere,” the company says.
Sony has a bewildering array of cloud-related services for live streams, production and post. The company describes Networked Live as an ecosystem to “enable production resources to be optimally connected, used, and shared” to facilitate remote production through on-premise and cloud solutions. It also markets a cloud gateway service called C3 Portal which can onboard live feeds from cellular bonding links via Teradek, LiveU, and TVU Networks and in combination with Dejero and Haivision.
Sony further offers Creators’ Cloud, which comprises a number of cloud-based platforms and apps including a new Multi-Cam monitoring function, as well as media management service Ci Media Cloud. A new integration with Marquis’s Medway provides automated ingest from Ci into Avid systems. Ci also has a new workflow to support automated VFX pulls.
Cloud object storage platform Storj and storage and file management developer Amove have joined forces to offer media customers a route from on-premise into hybrid and full cloud environments.
Amove provides a desktop drive that offers instant access to any cloud storage provider (AWS, Azure, Wasabi and some 30 other providers are mentioned) as well as to Storj. The Amove Drive allows users to mount their storage buckets directly from the desktop, “providing a true multi-cloud management tool that delivers immediate access to the largest files from any cloud or on-premise storage,” according to the companies.
Features include syncs between providers, file sharing, cloud to cloud migrations, backups, and AI powered deduplication. Patrick Kennedy, Amove CEO stated, “After years of development and testing over 45 services, we chose Storj as the ideal partner to deliver our users instant capacity from Amove Drives with incredible speed, cost efficiency and performance within an innovative architecture that supports remote streaming and access from anywhere.”
Backblaze, meanwhile, is opening up its cloud storage technology as a white label for other companies to build on. As CEO Gleb Budman explained, “Backblaze offers companies the ability to deliver the value of our cloud to their customers without the complexity of building their own high performance infrastructure. We are happy to take care of that part so that businesses can easily expand their platforms with affordable, reliable data storage.”
There are two ways customers can do this. Custom Domains lets businesses serve content to end-users from the web domain or URL of their choosing, “with no need for complex code,” and with Backblaze managing the heavy lifting of cloud storage on the back end.
Software developer Azion has chosen to go this route, with CEO Rafael Umann saying, “We can implement the security needed to serve data from Backblaze to end users from Azion’s Edge Platform, improving user experience.”
Organizations can also use an API to provision cloud storage accounts from Backblaze from within their own platform.
“Our customers produce thousands of hours of content daily and they need a place to store both their original and transcoded files,” says Murad Mordukhay, CEO at cloud video solutions provider Qencode. “The Backblaze API allows us to expand our cloud services and eliminate complexity for our customers — giving them time to focus on their business needs, while we focus on innovations that drive more value.”
Wasabi AiR applies AI-driven metadata, auto-tagging and multilingual speech-to-text transcription to cloud media storage. This is the result of the company’s acquisition in January of Curio AI. Video files uploaded to Wasabi AiR are immediately analyzed and compiled into a searchable metadata index.
“Why move to the cloud if you still can’t find anything?” said Wasabi co-founder and CEO David Friend. “Object storage without metadata is like a library without a catalog. Wasabi AiR works right out of the box and it’s as simple to use as popular search engines. For example, if it finds a face that it doesn’t recognize, it asks ‘Who is this?’ Using a simple UI, the user can train their own models. You can have tens of thousands of hours of video, and Wasabi AiR will take you right to the moment you are looking for.”
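The “library with a catalog” analogy maps cleanly onto an inverted index: each clip carries timecoded tags (from auto-tagging and speech-to-text), and a search jumps straight to the matching moments. The data and field names below are invented; this illustrates the indexing concept, not Wasabi’s product internals.

```python
# Toy searchable media index: map each tag to (clip, timecode) hits, so a
# query returns the exact moments to jump to rather than whole files.
from collections import defaultdict

def build_index(clips):
    """Map each lowercase tag to a list of (clip_name, seconds) hits."""
    index = defaultdict(list)
    for clip in clips:
        for seconds, tag in clip["tags"]:
            index[tag.lower()].append((clip["name"], seconds))
    return index

CLIPS = [
    {"name": "game1.mxf", "tags": [(12.0, "kickoff"), (340.5, "goal")]},
    {"name": "game2.mxf", "tags": [(87.2, "goal"), (95.0, "crowd")]},
]

index = build_index(CLIPS)
goal_moments = index["goal"]  # every "goal" moment across the archive
```

Scale this to tens of thousands of hours of footage and the index, not the storage, becomes the product: the search result is a clip plus a timecode to seek to.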
Wasabi claims this product “greatly reduces” the cost of metadata creation since customers pay only for the storage with no additional charge for use of the AI.
Dave McCarthy, research VP at analyst IDC, said, “Wasabi AiR represents a significant advancement in tackling the longstanding issue of managing extensive data archives, within a substantial market for intelligent media storage solutions.”
Akamai is now using NVIDIA GPUs to beef up the encoding capabilities of its cloud-based service. The new GPUs are said to be 25x faster than traditional CPU-based encoding and transcoding methods, “which presents a significant advancement in the way streaming service providers address their typical workload challenges.” Use cases outlined by Akamai include transcoding live video streams, rendering 3D graphics for VR and AR content, and for training and inferencing generative AI.
If you broadcast, stream, or distribute live video in the cloud, chances are you’ve spent time building, testing, and securing your workflows. AWS has a new workflow monitor making it easier to do this while running AWS cloud services.
It displays the relationships between resources in a graphical signal map, so you can see which resources are in use and how they are connected.
AWS product marketing manager Dan Gehred says, “Once signal maps are created, you use the workflow monitor to create and apply alarm and notification templates to alert you when issues arise.”
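Under the hood, a signal map is a directed graph of resources, and “seeing how they are connected” is a graph traversal. The resource names below are hypothetical and this is not the AWS API, just a sketch of the underlying idea.

```python
# Toy "signal map": model the live video chain as a directed graph and walk
# downstream from any resource to see everything it feeds.

GRAPH = {  # edges point downstream: source -> consumers
    "camera-feed": ["medialive-channel"],
    "medialive-channel": ["mediapackage-endpoint"],
    "mediapackage-endpoint": ["cloudfront-distribution"],
}

def downstream(resource: str, graph: dict) -> list:
    """Return every resource reachable from `resource`, in visit order."""
    seen, stack = [], [resource]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.append(nxt)
                stack.append(nxt)
    return seen

chain = downstream("camera-feed", GRAPH)
```

With the graph in hand, attaching an alarm template to a node and propagating alerts along its edges is a natural next step, which is essentially what the monitoring tool automates.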
OpenDrives unveiled its Atlas platform at the 2024 NAB Show, a new software-defined storage solution that integrates seamlessly with a broad range of third-party hardware. This versatility enables users to customize the system according to their unique creative workflows, facilitating efficient and high-speed collaboration. Designed for content creators, broadcasters, sports organizations, and technologists, Atlas adapts to specific data management needs, moving away from traditional fixed hardware/software configurations.
Designed to work seamlessly with various applications, data, and users, Atlas offers a highly performant, cost-efficient platform pre-tuned for content creation, automating internal data management and allowing users to focus more on creative output.
The platform includes an integrated remote editorial and post-production workflow, using a private in-place cloud model to power the DaVinci Resolve editing suite for users in multiple locations, without performance degradation. It also helps accelerate Adobe Premiere workflows, adapting dynamically to data workloads and optimizing performance. This allows editorial teams to work concurrently on the same master project, at scale, without interruption, locking and sharing projects at will.
Backed by OpenDrives’ commitment to continuous innovation, Atlas incorporates advanced technologies such as containerization, REST APIs, and automation. Its design not only maximizes the efficiency of third-party hardware but also ensures a sustainable ROI by continually adding new features and functionality without extra costs.
AI Is Definitely Changing (But Not Destroying) Hollywood
Posted April 24, 2024
TL;DR
Generative AI is creeping in behind the scenes of film and TV production, but is not yet good enough to auto-generate an entire feature from scratch.
Hollywood has been here before, in the sense that every disruptive technology, from soundtracks to digital cameras, was eventually co-opted to the goal of telling stories.
There’s a current backlash in town against using AI for anything more than ideation or speeding up certain processes, but that may change when the next generation of tools like Sora lands on the market.
The current consensus appears to be that generative video is not yet a Hollywood-killer and perhaps never will be. While AI is creeping into production, it is doing so to augment certain workflows or make specific alterations with no sign of it being used to auto-generate entire feature films or push creatives out of a job. But it’s still the early days.
“It’s a fraught time because the messaging that’s out there is not being led by creators,” said producer Diana Williams, a former Lucasfilm executive now CEO and co-founder of Kinetic Energy Entertainment at the 2024 SXSW panel, “Visual (R)evolution: How AI is Impacting Creative Industries.”
Certainly, AI is a disruptive technology, but M&E of all industries should be used to taking tech change on board.
Julien Brami, a creative director and VFX supervisor at Zoic Studios, spoke on the panel with Williams, as Chris O’Falt reports at IndieWire. Brami said the common thread with each tech disruption is that filmmakers adopt new tools to tell stories. “I started understanding [with AI] that a computer can help me create way faster, iterate faster, and get there faster.”
Speed. That’s what you hear, over and over again, as the real benefit of Gen AI imaging, writes O’Falt who spoke to numerous filmmakers about the topic.
“Few see a viable path for Gen AI video to make its way to the movies we watch. Using AI is currently the equivalent of showing up on set in a MAGA hat.”
Finding actual artists who are willing to use AI tools with some kind of intention is tough, agrees Fast Company’s Ryan Broderick. Most major art-sharing platforms have faced tremendous user backlash for allowing AI art, and there’s even a new technology called Nightshade that artists are using to block their images from training generative AI.
Graphic designer and digital art pioneer Rob Sheridan tells Fast Company that the backlash against AI tech in Hollywood is directly caused by both tech companies and studios claiming that it will eventually be able to spit out a movie from a single prompt. Instead, Sheridan says it’s already obvious that AI technology will never work without people who know how to integrate it into existing forms of art, whether it’s a poster or a feature film.
“The thing that is hurting that progress — for this to kind of fold into the tool kit of creators seamlessly — is this obnoxious tech bubble shit that’s going on,” he says. “They’re trying to con a bunch of people with a lot of money to invest in this dream and presenting this very crass image to people of how eager these companies are, apparently, to just ditch all their craftspeople and try out this thing that everyone can see isn’t going to work without craftspeople.”
Media consultant Doug Shapiro tells Fast Company that AI usage will increase in Hollywood as studios grow more comfortable with the tech. He also suspects the current backlash against using AI is likely temporary.
“There’s this kind of natural backlash that tends to ease over time,” he says. “It’s going to get harder and harder to tell where the effects of humans stopped, and AI starts.”
Generative AI is cropping up most commonly in relatively small-stakes instances during pre- and post-production. “Rather than spend a ton of money on storyboarding and animatics and paying very skilled artists to spend 12 weeks to come up with a concept,” Shapiro adds, “now you can actually walk into the pitch with the concept art in place because you did it overnight.”
Studios have also begun using AI to touch up an actor’s laugh lines or clean up imperfections on their face that might not be caught until after shooting has wrapped. In both cases, viewers might not necessarily even know they’re looking at something that has been altered by an AI model.
David Raskino, co-founder and CTO of AI developer Irreverent Labs, suggests to Will Douglas Heaven at MIT Technology Review that GenAI could be used to generate short scene-setting shots of the type that occur all the time in feature-length movies.
“Most are just a few seconds long, but they can take hours to film,” Raskino says. “Generative video models could soon be used to produce those in-between shots for a fraction of the cost. This could also be done on the fly in later stages of production, without requiring a reshoot.”
AI is putting filmmaking tools in the hands of more people than ever and who can argue that’s not a good thing?
Somme Requiem, for example, is a short film about World War I made by Los Angeles production company Myles. It was generated entirely using Runway’s Gen 2 model then stitched together, color-corrected, and set to music by human video editors.
As Douglas Heaven points out, “Myles picked the period wartime setting to make a point. It didn’t cost anywhere near the $250 million of Apple TV+ series Masters of the Air, nor take anywhere near as long as the four years Peter Jackson spent producing the World War I doc They Shall Not Grow Old from archive video.”
“Most filmmakers can only dream of ever having an opportunity to tell a story in this genre,” Myles’ founder and CEO, Josh Kahn, says to MIT Technology Review. “Independent filmmaking has been kind of dying. I think this will create an incredible resurgence.”
However, he says, he believes “the future of storytelling will be a hybrid workflow,” in which humans make the craft decisions using an array of AI tools to get to the end result faster and cheaper.
Michal Pechoucek, CTO at Gen Digital, agrees. “I think this is where the technology is headed,” he says. “We’ll see many different models, each specifically trained in a certain domain of movie production. These will just be tools used by talented video production teams.”
A big problem with current versions of generative video is the lack of control users have over the output. Producing still images can be hit and miss; producing a few seconds of video is riskier still. It’s why humans will need to be involved. And, of course, even as you read this, OpenAI’s Sora keeps getting better.
“Right now, it’s still fun, you get a-ha moments,” says Yishu Miao, CEO of UK-based AI startup Haiper. “But generating video that is exactly what you want is a very hard technical problem. We are some way off generating long, consistent videos from a single prompt.”
When It Comes to Creativity, Humans and Machines Can Co-Exist (But It Won’t Always Be Easy)
Posted April 24, 2024
TL;DR
Adobe’s AI expert Brooke Hopper says AI and art can coexist provided the machines are fed on verified and non-biased data.
There is only so much responsibility a supplier of AI tools is prepared to accept. Consumers need to accept their fair share of responsibility for questioning whether content is “faked.”
The one thing that differentiates human creativity from AI generated content is our own ability to break the rules. “Machines don’t know when and how to break rules,” she says. Until, of course, they do.
As uncertainty and debate heats up about the impact of generative AI on the creative arts, Adobe wants to defend its own position in developing AI tools integrated in Photoshop and its own GenAI model Firefly while also reinforcing its brand as a champion of the creator.
“I truly believe that to be human is to be creative, that creativity is a core part of who we are, whether or not you consider yourself a creative person,” says Brooke Hopper, principal designer for emerging design for AI/ML at Adobe, speaking to Debbie Millman, designer and educator at The School of Visual Arts and host of the podcast Design Matters.
While AI is in its infancy, it is easier to cling to the idea that creativity is essential to what it means to be human. As AI technology advances, however, what passes for art, imagination, or lived experience may become indistinguishable from the machine’s output.
“It’s our emotions, our point of view, our life experiences,” says Hopper. “It’s spontaneity, it’s deciding when and where to break the rules. And so I do think that there’s a coexistence of humans and machines [where] humans do what humans are good at and ultimately, the machines are learning from us.”
Adobe can speak with a position of some strength here since it took a decision several years ago to support and build a pathway for tracking how AI has altered images and video content while training its own AI tools on data that it owns or has been cleared for use by third parties.
“We have to give them that data,” she says, while also anthropomorphizing the machines. “They’re not at this point in time making up data on their own. They’re simply taking the data that we feed them, breaking it down, and then recreating it from noise.”
Hopper acknowledges the issues that come from feeding the machine data that humans have created.
“One thing to remember is these machines rely on information that humans put out into the world. And humans are biased, whether we try to be or not, we are. Therefore the machines are. We need to do things in order to mitigate that bias.”
Adobe advocates training AI on data that is licensed, with verified ownership, and that isn’t copyrighted. It began the Content Authenticity Initiative in 2019 to help avoid some of the deepfake issues that are now surfacing with regularity.
By embedding metadata in the content that’s being created and being able to tag content with “do not train” credentials, it hopes to “actively pursue ways that we can make sure that there are artists protections and that creators are being protected.”
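The embedding idea can be illustrated with a trivial model: attach a provenance record, including a “do not train” flag, to each asset, and have a well-behaved trainer honor it. The field names here are invented; real Content Credentials follow the C2PA specification and are cryptographically signed, not plain dictionary entries.

```python
# Toy provenance model: a credentials record travels with the asset, and a
# cautious training pipeline skips anything flagged or lacking credentials.

def attach_credentials(asset: dict, creator: str, do_not_train: bool) -> dict:
    """Return a copy of the asset with an embedded provenance record."""
    return {**asset, "credentials": {"creator": creator,
                                     "do_not_train": do_not_train}}

def trainable(asset: dict) -> bool:
    """Only assets that carry credentials AND are not flagged may be used."""
    creds = asset.get("credentials")
    return bool(creds) and not creds["do_not_train"]

# Hypothetical example: an artist opts her file out of training.
img = attach_credentials({"name": "poster.psd"},
                         creator="example-artist", do_not_train=True)
```

The hard part in the real world is not the flag itself but making it tamper-evident and getting every training pipeline to check it, which is what the Content Authenticity Initiative is pushing for.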
As other Adobe execs have indicated, there is only so much responsibility a supplier of AI tools is prepared to accept. Consumers need to accept their fair share of responsibility for questioning whether content is “faked.”
Hopper says, “The general population [should be] educated about how to spot a deep fake, how to know if a website is not secure, because the technology to create deepfakes is getting better. And unfortunately, it’s the same technology that’s helping people create new and different content.”
Hopper backs moves by Congress and other state bodies to enshrine protections from deepfakes in law. “But if nothing is done legally, then morality is always a little bit of a slippery slope.”
The argument from Adobe is that human creativity will never be usurped by AI; that it remains a tool to be used as part of a human led creative process. Hopper is an artist herself, and says she would like to use GenAI tools to help her print 3D designs.
“That’s not to say that I’m going to become a professional 3D artist by any means, but [it] allows me to work in medium and media that I wouldn’t be able to, or would struggle to previously. And that’s what I’m super excited about.”
Generative AI, she insists, is “super useful” within the ideation phase. “Imagine being able to generate even more ideas and more different directions to be able to come to such a better end goal.”
And the one thing that differentiates human-made from AI-generated content, she says, is our own ability to break the rules.
“Machines don’t know when and how to break rules. They follow the rules. So that’s what we lean into. One of the biggest design principles is you have to learn the rules in order to break the rules, and breaking the rules is what makes something creative and enjoyable. It’s that serendipitous rule breaking that feeds into creativity.”
Just now, your basic GenAI tool cannot “think.” It will spew infinite versions, each one different, of an input we give it, based on data we give it. That may well change. But Adobe and Hopper look on the bright side. What else can they do?
“In the next 10 years we’re going to see an explosion of more creativity and content and, I think, more awareness. I’m really excited about the possibilities of more immersive design and experiences,” Hopper says. “Like, what happens when you’re potentially interacting with the artist in [a gallery or museum] piece, or you become part of the piece?”
There’s a conspiracy theory that the web is effectively dead, and made up primarily of bots and generated content. While that may not already be true, the AI companies seem determined to make it a reality.
So says Paris Marx, writer of the Disconnect newsletter and host of the Tech Won’t Save Us podcast, in a coruscating critique of the death of the open internet at the hands of corporate greed.
In a blog post, he claims that the digital revolution has failed. This is the idea, enshrined by liberal thinkers at the birth of the World Wide Web in the mid-1990s, that the virtual world would be a place for equals “free from the burdens of race, sex, or wealth,” and from the dictates of government or business.
Instead, this utopia has been slowly strangled by its own “platformization.” Marx concedes that the compartmentalization of the web made it much easier for billions more people to get online, but it concentrated huge power and wealth in the hands of a few.
Google and Facebook get it with both barrels. “The greed of those two companies has sent news media spiraling, with lower ad revenue leading to successive layoffs that reduce the quality of the journalism they publish while their websites are stuffed with poor quality ads if not locked behind a paywall altogether,” Marx writes.
Amazon doesn’t escape either and neither do streamers, but compared to where we are now, “we look back on those times as the good old days, before the ambitions of tech companies vastly expanded and the pressure for profit accelerated the degradation of what they’d built,” he recalls.
“Everything must be sacrificed on the altar of tech capitalism,” he says, of which AI is the nadir.
It is corrosive to society, culture, politics and anathema to us as human beings, he suggests.
“The effort to route as many interactions as possible through apps and make our smartphones as addictive as possible has spawned an epidemic of loneliness and even social disconnection.”
AI tools are emphatically not beginning of a vast expansion in human potential, says Marx. “They’re not intelligent or prescient; they’re just churning out synthetic material that aligns with all the connections they’ve made between the training data they pulled from the open web.
“Once again, the push to adopt these AI technologies isn’t about making our lives better, it’s about reducing the cost of producing ever more content to keep people engaged, to serve ads against, and keep people subscribed to struggling streaming services. The public doesn’t want the quality of news, entertainment, and human interactions to further decline because of the demands of investors for even greater profits, but that doesn’t matter.”
He calls out the massive (hidden) energy, water, and mineral cost of running all the data centers behind AI tools to show “how little the proliferation of AI tools has to offer us.”
So what’s Marx going to do about it? Well, to paraphrase his 19th century namesake, we need to tear down the machine and build us all a new one.
“Another internet is possible,” he says. “The time for tinkering around the edges has passed. The only hope to be found today is in seeking to tear down the edifice the tech industry has erected and to build new foundations for a different kind of internet that isn’t poisoned by the requirement to produce obscene and ever-increasing profits to fill the overflowing coffers of a narrow segment of the population.”
Alas, as well written as this sermon is, Marx has no manifesto for how to knock this house right down.
He argues that “we don’t have to be locked into the digital dystopia Silicon Valley has created,” but can’t serve up an alternative except with a loose sketch.
As the web declines, Marx says, “we need to consider what a better alternative could look like and the political project it would fit within.”
“Civil War” and “The Creator:” Choose the Camera That Works for Your Story
Choice of camera and lens has always mattered to visual artists, but the instinct to automatically reach for a high-end digital cine (or 35mm) package is being challenged thanks to the success of recent filmmakers.
Gareth Edwards’ $80 million The Creator and Alex Garland’s $50 million Civil War are the prize examples of big budget IMAX releases shot largely with unconventional prosumer-style cameras.
Although cameras like the Sony FX3 and DJI Ronin 4D are considerably less expensive than high-end digital cine gear like a Panavised RED or Sony Venice, cost was less a reason for their use on these films than their suitability for the job.
As Garland said about the Ronin 4D integrated camera and gimbal, “It’s a beautiful tool, not right for every movie, but uniquely right for some.”
That’s how filmmaker and photographer Patrick Tomasso thinks filmmakers should now approach all cameras.
“Not every camera is right for every movie,” he argues in a YouTube video. “I’m not suggesting that you go shoot your movie with an iPhone or a GoPro. I’m suggesting that you just find the camera that lets you accomplish the narrative and story that you wish to tell.”
It could be an ARRI Alexa but it could just as easily be a Blackmagic Ursa 12K or a Sony FX3 or maybe even a GoPro.
“It doesn’t really matter as long as you know the one that’s going to have the least amount of barriers between you and the thing that you want to create.”
Tomasso points to the 2013 thriller Blue Ruin, shot on the Canon C300, as another example, where director and DP Jeremy Saulnier wanted kit that allowed rapid setups and control over lighting.
Steven Soderbergh is a past master in this field, having experimented by shooting an entire feature on an iPhone (Unsane, 2018). “I think this is the future. Anyone going to see this movie without any idea of the backstory to the production will have no idea this was shot on the phone.”
As The Guardian’s Charles Bramesco pointed out, “It’s the skill of a great artist to turn a limitation into a strength, and indeed, Soderbergh has harnessed the potential of the gizmo in your pocket to create a striking and affecting new visual dialect.”
Soderbergh is at it again with new psychological thriller Presence, which he shot on a Sony DSLR to achieve a fluid point-of-view perspective in keeping with his creepy story.
Provided the camera meets the specs for distribution (theatrical, IMAX, streaming, etc.) then what’s the big deal with using a camera that could be snapped up on eBay rather than rented for a hundred thousand dollars?
These films don’t look as good as they do because of the cameras being used, contends Tomasso. They look good because of the skill of the cinematographer and of the director because they’ve chosen the right tool for the particular story they are telling.
“It’s having an excellent eye. It’s knowing framing and composition. It’s shooting at the right hours of the day on top of great wardrobe and locations and props which is why the movie looks so good.”
He says filmmakers have faced challenges because of the assumption that only the best equipment could produce good, successful, high-end projects, when in fact filmmakers like Garland and Soderbergh are willing to experiment with gear that is cheap, small, or DIY.
“It means the industry is getting off its high horse [with] the idea that you have to have an Alexa or a Sony Venice [or other] prestige cameras to make prestige content.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
The (Subversive) Storytelling Style for Park Chan-wook’s “The Sympathizer”
HBO’s latest miniseries, The Sympathizer, delves into the Vietnam War through a lens rarely seen in U.S. media, presenting a story soaked in the vivid hues of Vietnamese experience and perspective.
Created by Don McKellar and visionary South Korean film auteur Park Chan-wook (Oldboy, Decision to Leave), the seven-episode series adapts Viet Thanh Nguyen's Pulitzer Prize-winning novel, offering a new perspective on the war's historical impact.
When Park first encountered Nguyen’s novel, he was struck by its explosively expressive prose, which he described to Laura Zornosa at the Los Angeles Times as like watching fireworks — an experience he aimed to translate into the visual and narrative style of The Sympathizer.
Park, who also directed the first three episodes of the series, is known for blending stark humor with somber themes, a method he believes intensifies the emotional resonance of his films, reflecting the complex spectrum of human experiences.
“Believe it or not, I’m a director who puts significant importance in humor when I go about making a movie, because I believe — when the humor is combined with tragedy or violence — it actually makes it even more powerful,” Park relayed. “And that composite is what enables you to express in totality what a human being is, or what life is.”
The Sympathizer centers on the life of The Captain, portrayed by Hoa Xuande. Born to a Vietnamese mother and a French priest, The Captain embodies Eastern and Western influences, reflecting the cultural and colonial complexities of Vietnam.
Robert Downey Jr. takes on multiple roles for the series, each embodying a different facet of American hegemony and stereotypes. Downey portrays characters like CIA agent Claude, the myopic Hollywood director known as The Auteur, and other symbolic figures that reflect “the melded faces of American imperialism and colonialism,” Zornosa writes.
This casting emphasizes the thematic exploration of identity and perception, showcasing Downey’s versatility, while critically examining the portrayal of historical narratives through a satirical lens.
Park was partially inspired by Stanley Kubrick, who did something similar with Peter Sellers in Dr. Strangelove, another Cold War-era political black comedy. “Our original novel’s intention was to have one and the same body having different faces,” he said.
This notion is pivotal, as the series explores themes of identity, the lasting impacts of colonialism, and the inherent contradictions within cultural understanding. Through The Captain’s eyes, viewers navigate the murky waters of allegiance and identity during and after the Vietnam War — or the American War, as it’s called in Vietnam — offering a multifaceted perspective that challenges conventional narratives.
The Sympathizer extends beyond the confines of the Vietnam War to reflect broader historical and cultural dilemmas. Through its detailed portrayal of characters and conflicts, the series highlights the persistent echoes of imperialism and cultural misunderstanding that resonate in contemporary global conflicts.
“When I was writing the book,” Nguyen notes, “it was always treating the war in Vietnam as an episode in a much longer history of American imperialism and colonialism.”
The Sympathizer is emblematic of a production greenlit "late in the era of so-called Peak TV," Jia Tolentino writes for the New Yorker. She notes that the limited series "is the product of a marriage between two eminent tastemakers, A24 and HBO," booked three years ago (before Discovery's acquisition of the latter and subsequent industry contractions).
Tolentino writes that Park's "gift for sumptuous spectacle [is] underpinned by meticulous preparation," from detailed storyboarding to on-set precision.
Production required 120 days, and time on set with Park is "notoriously calm," often marked by days that wrap early, per production designer Alec Hammond. That precision, however, resulted in an economy of coverage. "There's not a lot of latitude in the edit, which is not usual for television executives to see," Don McKellar, Park's co-showrunner for The Sympathizer, told Tolentino.
In general, Tolentino observes, "Making television is a more bureaucratic process than filmmaking, involving more input from more people on more footage." (Perhaps Park's famous precision is one way to combat this tendency?)
For his part, Park "recognizes different constraints and opportunities," Tolentino writes. However, he told her that "you can waste your time 'like a millionaire wasting their abundant fortune'" when crafting a TV series. Instead, he aimed "not to waste 'one second, one minute, or even a frame.'"
Nonetheless, Tolentino questions whether Park is perfectly suited to the small screen. She writes, "Television, though, may never be quite the right medium for a filmmaker who casts a spell that's not meant to be broken, and who rewards the viewer through destabilization and discomfort."
“I tried very hard to make his colorful writing into a visual form,” Park told Salon senior critic Melanie McFarland during a Zoom interview, explaining his approach for the adaptation of Nguyen’s 2015 novel.
Additionally, McFarland writes, Park "takes certain liberties with the story that invite new interpretations and meanings."
For her part, McFarland wonders "whether parts of Nguyen's story provided [Park with] a means of commenting on the audience's tendency to revere certain filmmakers and excuse their excesses" (exemplified by the portrayal of The Auteur), or whether the series is considering the work of a critic (one of Park's prior careers).
Park told her: "There's some part of me that is very interested in that idea, and perhaps it has to do with my background as a critic too. But what is important for me whenever I go about making any kind of work is preserving the right amount of distance with the subject matter."
He says that he seeks "to have the viewer be engaged in the story and align with the protagonist and his emotional state."
Epic Games, renowned for its blockbuster game Fortnite, is not just a leader in the gaming industry; it is at the forefront of a technological revolution. Through its Unreal Engine, Epic is rapidly redefining the boundaries between digital fantasy and tangible reality, pushing the limits of what digital technology can achieve not just in gaming but in a number of other sectors.
Anna Wiener, writing in The New Yorker, describes how Unreal Engine has expanded far beyond its original gaming purpose to become foundational technology for multiple industries. The game engine has been employed for movies like The Batman and Barbie, for architectural visualizations, and even for NASA's hyper-realistic lunar simulations.
“These little ‘game engines,’ as we called them at the time, are becoming simulation engines for reality,” says Tim Sweeney, CEO of Epic Games, underscoring the transformational impact of Unreal Engine, and its evolution from a tool for rendering 3D virtual graphics for video games to a pivotal technology capable of molding our perception of reality itself.
Furthering this blurring of lines between the digital and the real, Epic Games subsidiary Quixel has embarked on a mission to “scan the world,” digitizing environments and objects to create a vast archive of digital assets. These assets enhance not only games but also films and virtual reality applications, effectively bridging the gap between digital fantasies and real-world textures.
“We have, to a great extent, mastered our ability to digitize the real world,” Quixel co-founder Teddy Bergsman Lind comments, highlighting the remarkable capability of Unreal Engine to replicate the richness and complexity of real-world environments, making them virtually indistinguishable from their physical counterparts.
However, despite these significant technological strides, challenges remain, particularly in the realm of creating realistic human avatars. Epic’s MetaHuman project strives to overcome these hurdles, aiming to perfect the simulation of human nuances.
The project's goal is to make creating high-fidelity digital humans easy, though getting the movements right, particularly the ability to make eye contact, remains a complex challenge, according to Vladimir Mastilović, vice president of Digital Humans Technology at Epic Games.
Fortnite has evolved into a vibrant platform for cultural exchange, hosting live concerts and pioneering new forms of social interaction. This cultural shift is further bolstered by strategic partnerships, such as the one with Disney, which recently invested $1.5 billion in Epic Games for a 9% stake in the company and has announced plans to create a Disney universe in Fortnite.
While Unreal Engine has ushered in a new era of visual and interactive fidelity, limitations persist, particularly in simulating fluid dynamics and intricate human gestures. Epic Games CTO Kim Libreri admits, “Getting that level of simulation is very, very hard right now.” Even the smallest human gestures can be headaches, he says, underscoring the technical challenges that still lie ahead. “Putting your hand through your hair — that’s an unbelievably complicated problem to solve. We have physics simulation to make it wobble and stuff, but it’s almost at the molecular level.”
Hollywood has eagerly adopted Unreal Engine for virtual production, a technique that allows filmmakers to create and manipulate digital environments in real-time. This innovation is revolutionizing filmmaking, providing unprecedented flexibility and creative control.
As filmmakers increasingly turn to Unreal Engine for creating not just scenes but entire worlds, the line between digital artifice and physical reality becomes blurrier, though not without challenges. Efforts to bridge the gap between digital creation and physical authenticity are ongoing. Oscar-winning VFX supervisor and DNEG principal Paul Franklin observes that virtual production is at its most effective with fantasy worlds that don’t necessarily adhere to reality. “We all have an intuitive understanding of how things move in the real world, and creating that sense of reality is tough.”
Sweeney’s personal commitment to land conservation offers a poignant contrast to his digital endeavors. Owning thousands of acres for conservation, he sees the natural world as both a challenge and an inspiration for digital simulation.
“When you’re standing on a mountaintop, looking out into the distance, you’re seeing the effect of trillions of leaves of trees,” he reflects. “In the aggregate, they don’t behave as ordinary solid objects. When you look at the real world and see all the areas where computer graphics are falling short of the real world, you tend to realize we have a lot of work yet to do.”
He speculates that “an efficient, realistic simulation of a forest would require a ‘geology simulator’ and an ‘ecology simulator,’ each with its own complex sets of rules.”
Does that mean a complete simulation of reality remains far off? Perhaps, but Epic Games, through Unreal Engine and its various initiatives, is not just redefining video games and digital content creation; it is reshaping how we interact with and understand the very fabric of reality itself. These technologies expand the horizons of the possible and chart new territories in both the digital and natural worlds.
It’s Only the Meaning of Life: Wim Wenders and “Perfect Days”
TL;DR
Wim Wenders talks about his Oscar-nominated film “Perfect Days,” which began life as a commission for a promotional documentary about Tokyo’s unique public toilets.
Wenders strove to make a verité drama that is neither quite documentary nor quite fiction but hopefully offers a truthful encapsulation of its subject.
One metaphor Wenders uses is the Japanese concept of “komorebi,” which conveys the idea of shadow and light and by extension an act of ritual for the film’s janitor character.
The slow, meticulous and mundane routine of a lowly toilet cleaner in Tokyo is the subject of German auteur Wim Wenders’ latest film and it has critics waxing lyrical about its transcendental and poetical exploration of life.
Perfect Days was nominated for an Oscar for best international feature, and won Japanese screen icon Kôji Yakusho the Best Actor award at Cannes 2023 for his turn as the lead character Hirayama.
“I see out of the corner of my eyes,” the director told A.frame. “Films very often tell what’s in the center of your vision, and Hirayama is a person who sees a lot more and pays attention to some of the little things we forget to. He sees the homeless man who lives next to one of these toilets; he respects, greets, and treats him like everybody else. Hirayama is a person who sees everything that gets lost so often in movies.”
Ahead of the 2020 Tokyo Olympics, The Tokyo Toilet Project invited contemporary designers and architects to create 17 public bathrooms in Japan’s capital. Wenders was initially invited to make a series of promotional documentary shorts about the unique facilities, but on seeing them himself he came up with an idea for a fiction feature.
With co-writer Takuma Takasaki, Wenders wrote a script in three weeks. The verité drama was shot in Tokyo over 16 days, with Wenders embracing a shooting style that exists between traditional documentary and narrative filmmaking.
The director added an arthouse touch by shooting in 4:3, a nod to the legendary Japanese director Yasujirō Ozu, but also a practical choice: “It is essential to see the floor,” he says of the toilets.
“Although it was a fictional story, we had a documentary approach. The camera was always on my DOP Franz Lustig’s shoulder — never on a tripod, never on tracks, never on a gimbal or a Steadicam or a crane. This fictional film with the totally fictitious character, Hirayama, was shot like a documentary.”
To make it feel even more authentic, their approach features mostly handheld camera work, and includes a lot of exterior scenes using available light.
The work of Ozu also inspired Wenders’ approach, as he describes in a video featurette. “There was this heightened attention [in Ozu’s films] that looked at every object as if it only existed once and only there and he was so attentive to every detail.”
Since each of the toilet buildings is designed by a different architect, “they all have the specific presence,” he said. “And that is also captivating. It’s not as if Hirayama is treating all these toilets as if they were just toilets; he sees them specifically. When you go to these places, you realize the beauty is that they’re all so different, and that each of them forces you to see the world differently. That is a beautiful thing. So, in that way, Ozu’s way of paying attention was ever present.”
Wenders explained to Mascha Deikova at CineD that he wanted to keep the protagonist’s history secret because “this story is not about any possible drama he had in the past. It’s about here and now and the universal humanity of the character,” he said.
“Generally, routine is considered something boring that you do automatically, without really being ‘present.’ To Hirayama, routine means beautiful processes you love doing and that give your life shape.”
With NPR’s Scott Simon the director goes further, describing toilet cleaning as a “mythical job.” He said, “You see him work cleaning toilets, and so you easily reduce his job to being a toilet cleaner. But slowly, you realize the richness of his life, and you realize that cleaning a toilet is a strangely metaphoric job.”
Komorebi as a Visual Metaphor
Another metaphor Wenders uses is the Japanese concept of “komorebi.” This term describes the dancing shadow patterns created by sunlight shining through rustling leaves and swaying branches. Every day, during his simple lunch in the park, Hirayama takes pictures of a particular tree with an old Olympus film camera.
“Black-and-white pictures might seem the same to an inattentive viewer. Yet, of course, they are all unique,” says CineD’s Deikova. “The whole concept behind komorebi is that it can exist only for a moment. So, this original passionate hobby is probably the most suitable visual symbol for the main character’s attitude about life.”
As the director explained in another video featurette, photographing Komorebi was a way of honoring the light and the trees. “[Hirayama’s] light photography became an act of saying ‘thank you’ to the light. And the simpler his life got, the happier he was. He became a monk without knowing it. I don’t think in his own thinking he was living the life of a monk — he was just living the life he started to like.”
Wenders says he doesn’t like to feel manipulated in the cinema and struck upon the ideal way to make the sort of film he likes with Perfect Days. He explained his philosophy in a video featurette.
“I hate it when I see a movie and in everything that happens, I see this is constructed. Or I’m reminded of [something] 10 minutes later that will have [story] consequences. I hate it when story becomes so obvious that it’s a construction.”
He even finds this in some of his own films, like The End of Violence and The Million Dollar Hotel. “I made movies where the little bit of story ruined the entire flow of credibility for me,” he says. “Sometimes tiny elements of stories can become giant disruptions.”
But with Perfect Days he strove to create a documentary of a fiction, finding that one of the ways to do this was to film without blocking or rehearsing.
“We basically gave up rehearsing after a while,” he said. “Our documentary approach is a little risky for actors. A lot of actors don’t feel secure if they cannot rehearse and know exactly what they’re supposed to do.
“In my fictional films, I am always so happy if I manage just in one shot here or there [that captures] something that is utterly real and only happens at the moment. I’m ready for a documentary moment to appear in science fiction. On the other hand, in my documentary films, I’m often trying to bring in a little element of fiction because I think that’s how we live. We live in reaction to things that actually happen around us, and we live as a reaction to things we imagine.”
He continues, “The stories that happen are very different categories than the story we impose, let’s say, on an actress in a movie. So Perfect Days became a wonderful mélange of documentary and fiction.”
His hit documentary feature Buena Vista Social Club, about a troupe of musicians in Cuba, is a case in point. The very fact of his making a film helped the world “rediscover” their music and give them a platform to stage their first ensemble performance together at New York’s Carnegie Hall.
“I thought I was making a music documentary when I was filming a real life miracle. I know how in fiction and documentary things cross each other and you can never really define what it is. And that’s the beauty of it. In Perfect Days, I didn’t have to make that effort. It just happened that the routine of that man and his daily work and his going to work in the morning and driving a car and listening to music and coming home and taking a bath in the public bath and going to bed and reading.
“I always had to make an effort in fiction to make it feel like documentary and in documentaries to make them feel like fiction. But here it suddenly happened just on its own. Maybe we had created the conditions that enable fiction and documentary to go so well together.”
Yakusho’s performance is largely silent but he is given a soundtrack to his mind. Often, the film finds him driving the streets of Tokyo listening to the Velvet Underground, the Rolling Stones, and Nina Simone — a soundtrack of Wenders’ own favorite music from the ‘70s and ‘80s.
“The songs are very much part of the storytelling process,” Wenders tells A.frame, “to the point that we put them into the script when we wrote it. For instance, the lyrics to Nina Simone’s ‘Feeling Good’ were on the first page of the script. They were not even intended to be used in the film.
“For me, they described how I imagined this character, his philosophy, and his way of living. They were my prologue. It was only in the end that I realized they were the best way to end the film.”
M&E Is Investing More in Social Media — and It’s Paying Off
TL;DR
Traditional media has officially stepped into the world of socially integrated media. Social-first content creators are emerging within Hollywood — and it’s working to capture the attention of social audiences and reach new viewers.
The “2024 State of Video” report from research specialist Tubular Labs found that long-form content is on the rise even on platforms like TikTok and Instagram, where M&E publishers are seeing greater engagement.
Broadcast, cable, and radio channels have generally used short-form platforms to drive viewers to long-form content platforms, but in 2024 traditionally short-form video platforms are now incorporating long-form content.
It hasn’t happened overnight, but media publishers have recognized the viewer and consumer power of social audiences — and resources are following. In an overview of media growth on social video, research specialist Tubular Labs finds mainstream media exhibiting “unprecedented growth on social video proving that investing in platforms leads to measurable ROI.”
YouTube has been Media & Entertainment’s bread and butter platform for many years due to its long-form, landscape content format, which is easily adapted from televised content. In 2023, YouTube uploads by M&E channels increased by only 5%, which is consistent with previous years; however, this long-term effort has paid off. With just 5% more uploads in the same time period, views organically grew +118% from 59 billion in January to 124 billion globally in December.
The “2024 State of Video” report also found that from 2022 to 2023, global M&E creators on TikTok posted 57% more videos and won 53% more views, receiving 36% more engagements.
While the US leads countries surveyed with 972 billion views of M&E-related posts on TikTok in 2023, Latin America is a fast-growing region. From 2022 to 2023, Brazil grew viewership by 32% (which is not far from the USA’s 39% year-over-year growth).
Movie and TV-related content on social media scored nearly 700 views on average per post last year, Tubular reports, although the leading growth category was related to Family & Parenting.
Sports viewership increased by 72% last year to a total of 71 billion views. Tubular expects this number to “skyrocket” in 2024 on the back of global sporting events like the Paris Olympic Games.
Long-form Content Is Making a Comeback
Since the rise of newer, short-form platforms like TikTok and Instagram, many publishers have refrained from posting long-form videos there. But in 2024, that is changing.
“In the past, broadcast, cable, and radio channels used short-form platforms to drive viewers to long-form content platforms. However, in 2024, traditionally short-form video platforms are now incorporating long-form content, making it easier for creators to adapt content for different social platforms.
“Broadcast, cable, radio, and film channels who post long-form content have stuck to the status quo — uploading the majority of their content to YouTube and Facebook, but the latest data reveals a missed opportunity on short-form platforms.”
On TikTok, Tubular data reveals that top publishers’ videos under the 30-second mark accounted for the most uploads in 2023, but longer videos actually won the highest average views per video.
Tubular’s key takeaway: Experiment with longer videos on traditionally short-form platforms to earn more views and engagements.
Influencers & Commercial Media Align
In 2024, we see Hollywood stars and broadcasters integrating with rising social media influencers, who have been used to fuel movie buzz, conduct red carpet interviews, and show media companies how to connect with social audiences.
Rather than producing televised, digital, and social content themselves, media companies can leave the latter to influencers, who arguably do it better.
One example: Recess Therapy is a social media series hosted by an NYC comedian who interviews kids on the playground. The host and two of his famous young interviewees debuted their skills on the red carpet during the 2024 Golden Globes interviewing actors like Margot Robbie and Jennifer Lopez.
The result: Facebook videos about Recess Therapy and #GoldenGlobes won 121% more average engagements per video than Golden Globes content posted by a leading US entertainment news channel.
Found in Translation: The Cultural Crossover of FX Series “Shogun”
TL;DR
“Shogun” showrunners Justin Marks and Rachel Kondo detail the painstaking process to re-create feudal Japan, including accurate translations of closed captions.
The FX show is a hit, and a large part of its success appears to be down to the attention to detail in its world-building of 17th-century feudal Japan, ensuring that historical and cultural accuracy doesn’t get lost in translation.
Among other things, the showrunners elevated the series’ star, veteran Japanese actor Hiroyuki Sanada, into the dual role of producer to advise on script and scene detail.
“Our strategy was pretty simple: Empower the people who actually know what’s authentic, and then listen to them,” says Rachel Kondo who, with husband Justin Marks, was involved in every aspect of adapting James Clavell’s 1,000-page novel into 10 episodes of TV for FX/Hulu.
“Everyone agreed that the only way to tell the story properly was to tell it (as in the novel) from a full variety of its points of view,” the showrunners explain to Mike DeAngelo at The Playlist podcast. “Which meant that in a story that is in some ways fundamentally about the act of translation, you have to be able to tell it in two different languages. It wouldn’t work any other way.”
Among other things, the duo elevated the series’ star, veteran Japanese actor Hiroyuki Sanada (Avengers: Endgame) into the dual role of producer, which would see him serve as a de facto cultural adviser to the show, assisting with everything from improving the scripts’ Japanese dialogue to casting many of the younger Japanese actors and making sure that traditional costumes were accurate.
The show has proved a critical and commercial hit, and a large part of this appears to be down to the attention to detail in its world-building of 17th-century feudal Japan, ensuring that historical and cultural accuracy doesn’t get lost in translation.
The book was previously adapted as a hugely successful miniseries in 1980, a few years after the novel itself was published in 1975.
“Generationally, that novel, for us, was the book we all grew up with our parents having on their nightstands,” Marks told The Hollywood Reporter’s Patrick Brzeski. “So many movies and television shows have ripped off its ‘stranger in a strange land’ archetype. So when FX sent it over, there was a representational side to it that concerned us.”
Marks said he was initially worried that it was a saga that had been seen too many times before, but then he read the book and realized that Clavell’s story was still relevant today.
“How do we encounter another culture? How do we encounter ourselves in that process? How do we create a language of curiosity, respect and humility when faced with what we don’t understand?
“All of these things were really impactful to us and now the time was right to adapt it in a way where you put the Japanese language and perspective at the forefront — to get closer to these characters, as opposed to keeping them at arm’s length.”
The 1980 miniseries foregrounded the story of white pirate Blackthorne whereas the original novel revolves around an ensemble of characters. It was this angle coupled with the ‘white savior’ narrative of the first show that Kondo and Marks as well as FX were intrigued to set right in their version. But doing so meant owning up to their own inevitably blinkered cultural perspective.
Marks continues, “All we want to do is find ways to subvert the gaze. We would never be able to properly invert the gaze, because we’re western filmmakers, just like James Clavell was a western writer. What we are hoping to do is to subvert the gaze enough to surprise the audience — like, let’s see Blackthorne as the Japanese see him.”
Another choice was to give greater agency to the story’s female characters, notably Mariko played by Anna Sawai. This was despite the constraints that the period setting forced on them.
Kondo says, “We found that because the female characters seem to have constraints — whether gender, or class or their faith system — there was a way to tell the story of how they used their limitations as a form of power. All of their stories deepen in really fascinating ways as they are compelled to translate their limitations into empowerment.”
The process of adapting Shogun to screen with due respect for Japanese culture and historical accuracy led them to create a complex workflow that exchanged ideas and translations back and forth with local experts, actors, co-producers and creatives.
“It often felt like we were building the car while already driving it down the road,” Kondo says of the process that began in the writers’ room and extended into the edit.
“What we wanted to do was to present something that people hadn’t seen before, but that by nature means there is no template,” she told NPR’s Scott Detrow. “And so this process was, I would say, quite chaotic and quite daunting.”
Over the course of translation, from English into Japanese, and from English into period Japanese, “where it almost kind of feels to the Japanese ear maybe more like Shakespeare would feel to the English-speaking ear,” her co-creator added, “there are a thousand nuances that you have to consider.”
Scripts, written in the US by Americans (albeit some were Asian American), were sent to Tokyo for translation. From there, they went to a Japanese playwright who specializes in ‘Jidaigeki’ (Japanese period drama, usually set during the Edo period of 1603 to 1868) and who added a literary touch.
“Then our Japanese producers, Eriko Miyagawa and Hiroyuki Sanada, oversaw that moment between taking it from the playwright and giving it to our actors to perform,” Kondo explained to THR. “And they were always discussing the scripts with Justin, asking, ‘Is this the intention? Is this what we are going for?’”
The craziest part was the painstaking process of translating the actors’ Japanese performances back into English subtitles. Details were fought over.
“I’d be like, ‘We should split this sentence here.’ And she’d be saying, ‘No, it flows better if we keep it together,’” Marks relates. “And then, I’d be like, ‘No, we need an em dash.’ And then we’d watch it 100 more times. And then I’d say, ‘OK, yeah, you were right. Let’s just leave it.’ So many obsessive conversations like this.”
They chose to position the subtitles higher up on the screen than you would normally see with closed captioning. This brought them closer to the eye line and was intended to reduce audience fatigue. They were also fastidious with the color timing behind the subtitles, to make sure that the words were always “jumping off the screen and you don’t get that white-on-white problem that we all remember from watching classic foreign films in black and white during our college days,” Marks explains.
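As a loose illustration of the “white-on-white” concern, the sketch below estimates a background’s relative luminance using the standard Rec. 709 luma weights and flags when white text would need a dark outline or backing to stay legible. This is not the show’s actual grading pipeline, and the 0.5 threshold is an arbitrary assumption for the demo:

```python
# Illustrative only: deciding whether white subtitle text risks the
# "white-on-white" problem over a given background color.
def relative_luminance(r: float, g: float, b: float) -> float:
    """Rec. 709 luma from linear RGB components in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def needs_outline(bg_rgb: tuple, threshold: float = 0.5) -> bool:
    """White text over a bright background needs a dark outline to read."""
    return relative_luminance(*bg_rgb) > threshold

# Bright sky behind the text -> outline needed; dark night scene -> fine.
print(needs_outline((0.9, 0.9, 0.95)))   # True
print(needs_outline((0.05, 0.05, 0.1)))  # False
```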
The show’s Japanese actors, notably Sanada, advised on details such as whether costume kimonos were tied appropriately while history advisors on the project ensured that the many scenes of rituals – including seating positions and bowing – were accurate.
There were two sides to this. Kondo and Marks felt they had to enable their Japanese collaborators to speak up “and speak outside of the traditional hierarchy when they noticed something was wrong” (which does not come naturally to many Japanese), whilst ensuring that the crew on set in Vancouver (home of the show’s soundstages and some exterior locations) were open to hearing suggestions and willing to respond to them.
One historian of Japan has weighed in on Shogun’s accuracy. Writing for The Conversation, University of Maryland professor Constantine Nomikos Vaporis says that while the original 1980 series reflects both the confidence of postwar America “and its fascination with its resurgent former enemy,” it was highly unpopular among Japanese viewers.
“FX’s remake demonstrates [that] American viewers today apparently don’t need to be slowly introduced to Japanese culture by a European guide,” and that the series does a far better job at cleaving to Clavell’s narrative in which Blackthorne comes to see Japan as far more civilized than the West.
“Of course, authenticity has its limits,” the historian notes. “The producers of both television series decided to adhere closely to the original novel. In doing so, they’re perhaps unwittingly reproducing certain stereotypes about Japan.
“Most strikingly, there’s the fetishization of death, as several characters have a penchant for violence and sadism, while many others commit ritual suicide, or seppuku.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
What Could/Should Be the Role of AI in the Newsroom?
TL;DR
News organizations are already creating news stories using AI. Doing so should require an understanding of the risks and putting in safeguards against misuse, argues Judy Parnall, head of standards for the BBC.
AI can’t be avoided, Parnall says, but it can be harnessed “for good to improve the process of news gathering and to increase trust in the truth of our reporting.”
News organizations will undoubtedly look after their own interests, she says, but many groups have already found common ground within the C2PA, a content credentials standardization initiative backed by the BBC.
AI is set to impact the news business significantly this year. From how news stories and videos are created to on-camera talent and how consumers get their news, the biggest change in the history of the news business is taking place now.
“People are reacting to AI as the worst thing that has ever happened to humanity or welcoming it as the most wonderful god-send,” says Judy Parnall, head of standards & industry at BBC.
“We know we can’t avoid it. The question is how can we harness it for good to improve the process of news gathering and to increase trust in the truth of our reporting.”
“We need to use these tools with our eyes open, to think about what we need to consider to ensure that the veracity of our brand and the news we output is not called into question.”
Some news organizations are already creating news stories using AI and/or have virtual news anchors reading out stories. Inevitably, more AI will creep into news process and news presentation as broadcasters are faced with the pressures of cutting budgets.
“It’s about understanding the risks and putting in safeguards, and that will change according to what is appropriate for your brand.”
Parnall emphasizes the importance of C2PA, the content credentials standardization initiative in which she personally and the BBC as a whole have played a major role.
“All news organizations will look at the issue in their own way but quite a few groups have come together to share common ground within the C2PA. We have common tools which we can share with the industry as needed to help address the threat and the potential of AI.”
The idea of media provenance or data integrity has been gaining ground in the tech community as a way of combatting the rush of AI-generated fakes. It is news media that is particularly vulnerable to this sort of attack (truth being the first casualty of war and also of political elections).
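For a rough sense of how provenance schemes work, here is a toy sketch: it binds a hash of the media bytes to an authenticated manifest, so any later tampering with the file or its claims is detectable. This illustrates only the general idea, not the C2PA format itself — real content credentials use X.509 certificate chains and a standardized manifest structure rather than the hypothetical shared HMAC key used here:

```python
# Toy provenance sketch (NOT the C2PA spec): hash the media, sign the
# manifest, and verify both on receipt.
import hashlib
import hmac
import json

NEWSROOM_KEY = b"demo-secret"  # hypothetical; real systems use PKI, not a shared key

def sign_media(media: bytes, claims: dict) -> dict:
    """Attach a signed manifest (content hash + claims) to a media asset."""
    manifest = {"sha256": hashlib.sha256(media).hexdigest(), **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    sig = hmac.new(NEWSROOM_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": sig}

def verify_media(media: bytes, signed: dict) -> bool:
    """True only if the manifest is authentic AND the media matches its hash."""
    payload = json.dumps(signed["manifest"], sort_keys=True).encode()
    expected = hmac.new(NEWSROOM_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, signed["signature"])
            and signed["manifest"]["sha256"] == hashlib.sha256(media).hexdigest())

clip = b"raw video bytes"
signed = sign_media(clip, {"publisher": "Example News"})
print(verify_media(clip, signed))         # True: untouched
print(verify_media(clip + b"x", signed))  # False: tampered
```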
So, in a bid to take the initiative, broadcasters including the BBC, along with Canada’s CBC and the New York Times, joined forces to ensure their own integrity as a trustworthy news source did not fall victim.
“What we’re seeing is a really fundamental shift in the media ecosystem that we need to act on,” says Parnall. “I kind of wish the elections [including UK and the US] were not this year. We’ll be in a better position in 2025 when efforts set in train a few years ago really come to fruition.”
The BBC has stated that it will develop its own “ethical algorithms” to serve as common values and proactively deploy AI on its terms as it transforms itself to increase relevance in an internet-only world, according to director-general Tim Davie in a recent state of the nation speech.
Davie warned against complacency, arguing that social and civilizational fabric was under threat from external forces. He said that “over 70% of countries do not have a free press” and “hostile states…are weaponizing disinformation and using AI for democratic disruption”, while “algorithms monetise controversy, driven, understandably, by the necessity to secure traffic as an overriding commercial objective”.
He said that AI would play a key role in the corporation’s future, for example by enabling the BBC to translate and reformat content, but also noted that the broadcaster had a duty to counter disinformation, particularly by investing in fact-checking.
“We will proactively deploy AI on our terms, always holding on to our published principles,” he said. “Never compromising human creative control, supporting rights holders and sustaining our editorial standards, but proactively launching tools that help us build relevance. We are now working with a number of major tech companies on BBC-specific pilots, and we will be deploying the most promising ones in coming months.”
Posted April 16, 2024
New Cameras at the 2024 NAB Show: From the Cine to the Mini
TL;DR
Technology innovation is moving at a fast and furious pace, as evidenced by a number of significant new camera releases at NAB Show 2024.
Blackmagic Design introduced the URSA Cine 12K, the company’s new flagship camera system designed for high end production, and the full-frame Blackmagic Pyxis 6K.
In its first outing since being acquired by Japanese camera maker Nikon, RED was at NAB Show highlighting broadcast solutions, including a SMPTE fiber cine-broadcast module and a broadcast color pipeline.
ARRI expanded its Alexa brand of high-end imagers into live broadcast setups with the Alexa 35 Live Multicam System.
Germany’s PROTON Camera Innovations launched the PROTON CAM, which the company bills as the world’s smallest broadcast-quality camera.
Imaging technology innovation has continued at a fast and furious pace; there have been a number of significant new camera releases at NAB Show 2024. Here’s a brief rundown:
High-End TV and Cinema
Blackmagic Design jumped into cameras a decade ago with its first Pocket Cinema Camera and has been upping the ante ever since. Its new URSA Cine 12K is the new flagship designed for high end production.
The $14,995 camera features a new sensor that builds on the technology of URSA Mini Pro 12K with larger photo-sites capable of 16 stops of dynamic range. The full sensor area gives customers a 3:2 open gate image enabling post reframing. Alternatively, the larger sensor area can be used to shoot anamorphic and deliver in a range of aspect ratios. Plus, you can shoot in 4K, 8K or 12K using the entire sensor without cropping, retaining the full field of view. There are even 9K Super 35 4-perf, 3-perf and 2-perf modes for full compatibility with classic cinema lenses. An optional Cine EVF is for outdoors and handheld shooting.
There are a variety of ways to work with the media, which at 12K uncompressed will be substantial. For example, the URSA Cine will enable H.264 proxy file creation in addition to the original camera media when recording. “This means the small proxy file can upload to Blackmagic Cloud in seconds so media is available back at the studio in real time,” according to the vendor. “The ability to transfer media directly into the DaVinci Resolve media bin as editors are working is revolutionary and has never before been possible.”
Media can also be streamed direct from the camera (over RTMP and SRT) to major platforms or to clients via Ethernet, WiFi or even connect a 5G phone for mobile data.
“We wanted to build our dream high end camera that had everything we had ever wanted,” said Grant Petty, Blackmagic Design CEO. “Blackmagic URSA Cine is the realization of that dream with a completely new generation of image sensor, a body with industry standard features and connections, and seamless integration into high end workflows. There’s been no expense spared in designing this camera and we think it will truly revolutionize all stages of production from capture to post!”
BMD has also repackaged its Pocket Cinema Camera concept into a rugged cube design that targets the market currently occupied by the likes of the RED Komodo. The full-frame Blackmagic Pyxis 6K costs $2,995, comes with multiple mounting points for camera rigs such as cranes, gimbals or drones, and is available with either L-Mount, PL or locking EF lens mounts.
Like the URSA Cine 12K, the Pyxis also generates proxy files for instant upload to the cloud and media availability to editors anywhere working in Resolve. It also includes an optional Cine EVF for outdoors and handheld shooting.
“Since the introduction of the original Pocket Cinema Cameras, our customers have been asking us to make it in a more customizable design,” said Petty. “But we wanted it to be so much more than just a Pocket Cinema Camera in a different body. The Pyxis is a fully professional cinema camera with more connections and seamless integration into post production workflows.”
Live Production/Studio
High-end camera makers are turning their attention to the broadcast and live events market, where there is growing demand for cinematic imagery (for live coverage, highlights and BTS documentaries) and the shallow depth of field of digital cine lenses.
The new RED Cine-Broadcast Module integrates the company’s V-RAPTOR camera into live broadcast scenarios such as sports and concerts. It enables two channels of 4K 60P (HDR/SDR) over 12G-SDI and is IP ready with SMPTE ST 2110 (TR-08) and up to a 4K 60P JPEG-XS feed. The module features a hybrid fiber optical cable connector which connects to a rack mountable base station.
Additionally, broadcasters will be able to shoot with slow motion, AI/ML augmentation and live-to-headset using 8K 120FPS R3Ds via RED Connect, available for a separate license.
Complementing these developments is a new firmware broadcast color pipeline for “live painting” of RED cameras in a broadcast or streaming environment.
ARRI has also expanded its Alexa brand of high-end imagers into live broadcast setups, from outside broadcast concerts, sports and esports to studio-based talk shows and game shows.
The Alexa 35 Live Multicam System integrates into existing live production environments providing the full functionality of a system camera while retaining the flexibility of a dockable camera setup, according to a release.
Supporting a current trend in live productions, the Super 35-sized 4K sensor enables shallow depth of field and offers 17 stops of dynamic range for handling extreme lighting situations presented in SDR and HDR.
“As a result, even contrast-y concert lighting is captured faithfully and skin tones are beautiful, so performers always look their best. Low light scenes display minimal noise, and highlights roll off in a natural, film-like way,” ARRI claims.
The full system comprises the new Alexa 35 Live camera (with fiber camera adapter and base station), a Skaarhoj remote control panel (though it can work with other RCPs), and accessories such as base and receiver plates, an adjustable monitor yoke, an extra-long camera handle, a tally light with camera ID display, and a rain cover. A new large lens adapter assists rapid setup with box lenses.
Operators can select one of 87 pre-made looks from a built-in Look Library, and producers can also choose from Textures (five multi-cam and eight cine-style) to modify grain and contrast.
“Our emphasis at NAB 2024 will be on the advance to 2160p 4K-UHD as the globally preferred standard for producing high-value television content,” said Ikegami’s Alan Keil. The company’s new UHK-X700 model can be used for pedestal-mounted studio operation, tripod-based sports coverage and shoulder-mounted location production, and features three 2/3-inch CMOS UHD sensors with a global shutter to minimize artifacts when shooting LED screens or scenes illuminated with flash or strobe lighting.
The UHL-F4000 is a compact and lightweight UHD HDR camera with low power consumption designed for aerial shots from a helicopter. The camera head sports three UHD CMOS global shutter sensors “capturing natural images completely clear of geometric distortion and flash band effects.”
Germany’s PROTON Camera Innovations launched the PROTON CAM, billed as the world’s smallest broadcast-quality camera.
Measuring just 28mm x 28mm and weighing only 24 grams, the PROTON CAM is tiny, yet the company says it delivers market-leading specifications for its class. It pairs 12-bit sensor technology with an advanced FPGA for high resolution and dynamic range, capturing details with exceptional clarity. It also offers a wide-angle view of up to 120 degrees and strong low-light performance without image distortion, giving broadcasters significant flexibility and creative scope in its deployment.
“Crucial to the core proposition of PROTON as a company is our ability to maintain 100% ownership over our research and design process, meaning that we guarantee full control over the innovation and quality standards of our product,” said PROTON CEO Marko Hoepken. “Whilst the tiny size of the PROTON is of course a key USP, it was crucial to us that this was not a gimmick that came at the expense of other deliverables. The exceptional image quality and technical specifications embodied within the PROTON are what will set it apart from the market.”
PTZ and Studio Automation
Production budgets and staff are being stretched like never before. In light of the need for more content to support multiplying distribution channels amid the headlines of economic recession, camera robotics comes into play. Innovations in this vibrant product sector range from higher quality sensors to AI face tracking, expanded SDI and NDI support and more.
Sony’s NAB launch, the BRC-AM7 PTZ, has a 20x optical zoom and AI-driven auto framing primed for sports coverage. Equipped with a 1.0-type image sensor, the integrated-lens PTZ remote camera is compatible with 4K 60p; Sony claims it is the smallest and lightest camera of its type in the world. It can also record at up to 4K 120p, another boost for dynamic sports action.
The $1,999 KY-PZ540 PTZ series from JVC comprises the company’s first PTZ cameras to incorporate a 40x zoom. The 4K imagers also feature JVC’s Variable Scan Mapping technology, which scans the sensor to produce a lossless image transition up to 40x in full-resolution HD. The cameras are intended for large event spaces and instances when the need to zoom in from a distance is essential, and they support NDI network connectivity.
“We already have an award-winning PTZ product – so increasing the zoom magnification while keeping the unit affordable made it possible for us to accommodate the needs of a larger segment of customers,” said Joe D’Amico, VP of JVC Professional Video.
Ikegami’s new UHL-43 4K UHD is a compact box-style camera designed for robotic studios, live-event broadcasting and point-of-view image capture. The camera head can be used on practically any support device such as a remote pan and tilt, long-reach arm or overhead mount. An Ethernet interface allows control from practically any distance, making the camera ideal for remotely supervised field production.
Ross Video’s Artimo addresses some common challenges faced with traditional studio camera movement solutions such as pedestals, dollies, and jibs. According to Ross, it offers quiet, fast, programmable moving shots without the limitations of fixed rails, markers, or the need for perfectly smooth studio surfaces. The increasing use of LEDs and the need to have more on-air movement “to engage and entertain the audience” makes a studio robotics solution like this necessary says Ross. It comes with geofencing and LiDAR so it can be programmed for uninterrupted operation, maneuvering around obstacles with precision.
Phones and Drones
Atomos has released the Ninja Phone, a 10-bit video co-processor for smartphones and tablets that lets you record from professional HDMI cameras. The $399 Ninja Phone is designed for the iPhone 15 Pro and iPhone 15 Pro Max and uses the phone’s OLED display and Apple ProRes encoding to create “the world’s most beautiful, portable, and connected professional monitor-recorder.”
It is also the first time Ninja users will have access to an OLED monitor screen, “which, at 446 PPI, is by far the highest resolution, most capable HDR monitor that’s ever been available to them,” added Atomos CEO Jeromy Young.
It is intended for use with many new, smaller format mirrorless cameras such as Fujifilm’s X100 and G series, Canon’s R5 series, Sony’s Alpha series, Nikon’s Z series and Panasonic’s GH and S series.
Atomos CEO and Co-Founder Jeromy Young said, “Ninja Phone is for the thousands of content creators who capture, store, and share video from their iPhone 15 Pro but aspire to work with professional cameras, lenses, and microphones. At the same time, the Ninja Phone is a perfect tool for longer-form professionals who want to adopt a cloud workflow without a complex and expensive technology footprint.”
The DJI RS 4 costs $869 and is capable of carrying up to 3kg (6.6lbs) of mirrorless camera and lens combinations for comfortable handling and robust power. A redesigned gimbal horizontal plate enables smoother transitions to vertical shooting.
The DJI RS 4 Pro retails for $1,099 and can carry 4.5kg (10lbs). It features an extended battery runtime of up to 29 hours provided the DJI RS BG70 Battery Grip is used. It also incorporates the firm’s proprietary LiDAR Autofocus system to offer cinematographers precise autofocus and enhanced control in dynamic shooting scenarios.
DJI is also making the LiDAR system independent of its own systems in the new DJI Focus Pro Automated Manual Focus (AMF) lens control system.
With a 70-degree focus field of view; 76,800 ranging points and a refresh rate of 30 Hz, the upgraded LiDAR “empowers cinematographers with intuitive spatial understanding by using LiDAR waveform as focus assistance, enhancing their ability to capture scenes with precision.”
According to the company the move marks a “significant leap forward” in providing LiDAR technology, once exclusive to the DJI PRO ecosystem, to more creators.
VR and 360 Cameras
Insta360 unveiled the X4, a versatile 360-degree action camera featuring 8K video and AI-powered gesture control. It appears to be a dramatic upgrade over its predecessor, offering higher video resolution, a larger touchscreen, longer battery life, and a more rugged build than the X3.
When shooting 360 video, for example, the X4 can capture up to 8K at 30fps, while the X3 tops out at 5.7K video at 30fps. In single-lens mode, the X4 can do 4K video at up to 60fps; the X3 also supports 4K video, but only at 30fps. Other video modes, including Bullet Time, Timelapse, and Me Mode, also received similar boosts to resolution.
The X4’s most notable upgrade might be its new gesture controls, which let users start/stop recording or take a photo using hand signals. That might sound like a niche feature, but for anyone who’s ever suffered the humiliation of shouting voice commands at a non-compliant camera in a public place, it’s a welcome addition.
Additional X4 camera upgrades include a slightly bigger and more durable 2.5” touchscreen, removable lens guards for improved ruggedness, and a major boost to battery life: lab tests showed that the X4 can record 5.7K video at 30 fps for 135 minutes, which is 67% longer than the X3’s recorded time of 81 minutes.
The Insta360 editing suite has options for creators of all levels, from one-tap, zero-effort AI edits, to fully customized manual editing. Reframing in the Insta360 app has two upgraded options. With Quick Edit (formerly known as Snap Wizard), simply move your phone or use the virtual joystick to point the camera, making 360° video editing easier than playing a video game. The reframed clips are immediately saved and ready to share.
For a fully hands-off approach, try AI Edit. Insta360’s algorithm handles the entire reframing process, now faster with improved subject detection. The Insta360 app also has Shot Lab, where creators can find 30+ viral-worthy effects that can be edited in just a few taps.
For creators who prefer a desktop workflow, the recently updated Insta360 Studio is a highly flexible editing tool for seriously clean, ready-to-share edits at maximized resolution, compatible with both 360° footage and regular flat images. Both the mobile app and desktop software are also free to use, no subscription required.
Besides this, Insta360 has just launched Insta360 Reframe, their own plugin for Adobe Premiere Pro. Creators can reframe 360° files shot on X4 directly in Premiere Pro for a smoother workflow with minimal exporting and maximum image quality.
Posted April 16, 2024
“Civil War:” The Camerawork to Capture the Chaos
TL;DR
Writer-director Alex Garland describes dystopian action movie “Civil War” as “a war film in the ‘Apocalypse Now’ mode.”
He sheds light on his unconventional filming style and how he crafted the film’s unique political tone by depoliticizing the reasons behind the American Civil War.
The film’s particular focus is on what happens when journalists are silenced and there’s a loss of shared truth.
Perhaps only an outsider could update the American Civil War of the 1860s and imagine what would happen if similar events tore apart the United States today.
British writer-director Alex Garland didn’t have to look far for inspiration: The January 6, 2021 mob attack on the Capitol was a vivid insurrection filmed live on TV in broad daylight. While these events are a thinly disguised template for the finale of his film Civil War, Garland seems less interested in apportioning blame to the political right or left than in asking why we might end up there again.
You could see similar events play out in Britain or any other country, he told an audience at SXSW after the film’s premiere. “Any country can disintegrate into civil war whether there are guns floating around the country or not,” he suggested, adding that “civil wars have been carried out with machetes and still managed to kill a million people.”
As much a road movie as a war film, it offers an alternate reality about what happens when nobody listens to the other point of view. Its particular focus is on what happens when journalists are silenced and there’s a loss of shared truth.
“I’ve known a lot of war correspondents because I grew up with them,” Garland said in the same on-stage interview. “My dad worked [as a political cartoonist] on the Daily Telegraph. So I was familiar with them.”
Garland showed cast and crew the documentary Under The Wire, about war correspondent Marie Colvin, who was murdered in Syria. His lead characters are news and war photographers played by Kirsten Dunst and Cailee Spaeny, whose characters’ names echo those of acclaimed photojournalists Don McCullin and Lee Miller. Murray Close, who took the jarringly moving photographs that appear in the film, studied the works of war photographers.
“There are at least two [types of war photographer],” said Garland. “One of them is very serious minded, often incredibly courageous, very, very clear eyed about the role of journalism. Other people who have served like veterans are having to deal with very deep levels of disturbance (with PTSD) and constantly questioning themselves about why they do this. Both [types] are being absorbed and repelled at the same time.”
He represents both types in the film. While it is important to get to the truth — in this case, the money shot of the execution of the US President — he questions if that goal should take priority over everything else they come across in their path. At what point, Garland asks, should the journalist stop being a witness and start being a participant?
“Honestly, it’s a nuanced question, nuanced answer,” he said. “I can’t say what is right or wrong. There’s been an argument for a long time about news footage. If a terrible event happens, how much do you show of dead bodies? Or pieces of bodies? Does that make people refuse to accept the news because they don’t want to access those images? Or worse, does it make them desensitized to those kinds of images? It’s a tricky balance to get right.”
In this particular case, one of the agendas was to make an anti-war movie if possible. He refers to Leni Riefenstahl’s controversial 1935 film Triumph of the Will, which is essentially Nazi propaganda.
Garland didn’t want to accidentally make a Triumph of the Will, he said, by making war seem kind of glamorous and fun. “It’s something movies can do quite easily,” he said. “I thought about it very hard and in the end, I thought being unblinking about some of the horrors of war was the correct thing to do. Now, whether I was correct or not, in that, that’s sort of not for me to judge but I thought about it.”
Garland establishes the chaos early, as Dunst’s character covers a mob scene where civilians reduced to refugees in their own country clamor for water. Suddenly, a woman runs in waving an American flag, a backpack full of explosives strapped to her chest.
“Like the coffee-shop explosion in Alfonso Cuarón’s Children of Men, the vérité-style blast puts us on edge — though the wider world might never witness it, were it not for Lee, who picks up her camera and starts documenting the carnage,” writes Peter Debruge in Variety.
To achieve the visceral tone to the action, Garland decided to shoot largely chronologically as the hero photographers attempt to cross the war lines from California to the White House.
After two weeks of rehearsals to talk through motivations and scenes and characters, Garland and DP Rob Hardy then worked to figure out how they were going to shoot it. He wanted the drama to be orchestrated by the actors, he told SXSW. “The micro dramas, the little beats you’re seeing in the background, are part of how the cast have inhabited the space.”
Spaeny offers insight into Garland’s “immersive” filming technique: “The way that Alex shot it was really intelligent, because he didn’t do it in a traditional style,” she says. “The cameras were almost invisible to us. It felt immersive and incredibly real. It was chilling.”
A featurette for the movie sheds light on Garland’s unconventional filming style, in which he describes Civil War as “a war film in the Apocalypse Now mode.”
While the A-camera was a Sony VENICE, they made extensive use of the DJI Ronin 4D-6K, which gave the filmmakers a human-eye perception of the action in a way that traditional tracks, dollies and cranes could not. They also bolted eight small cameras to the protagonists’ press van.
Aiming to balance both characters’ impulses while giving the audience a visceral sense of the danger, Hardy needed camera systems that were as flexible as possible, he recounted to IndieWire’s Sarah Shachat.
He found that having six Ronin 4Ds (one became a casualty of the shoot) allowed the camera team to get as close as possible to the perspective of the journalist characters in action sequences without needing to truncate or interrupt the action, Shachat notes.
“Since Ex Machina, we’ve very much set the precedent that would create these immersive environments and the cameras almost become secondary; the actors and everybody can walk into that environment and make it feel as authentic as possible. Which sounds like, well, wouldn’t that be a standard thing to do in all aspects of filmmaking? But surprisingly, it’s not,” Hardy said.
The smaller Ronin cameras allowed the DP and his team to switch between handheld and Steadicam work, as well as more composed shots, employing the visual language of both road trip and combat films.
“I could sit back on wheels if I needed to and have another operator in amongst the action and see things from a distance a bit more globally and make decisions about framing,” Hardy said.
“We were always working towards the idea that every single shot could be a still image if you went through each and every frame and picked that singular moment, and so the framing needed to be very important.”
For the film’s harrowing crescendo, a 15-minute siege on Washington, DC, Hardy employed a highly kinetic, vérité approach. Emphasizing the perspective of the journalists caught in the middle of the attack, it is a high-decibel, chaotic sequence showcasing an array of practical effects, including speeding Humvees, bulldozing tanks and nonstop gunfire.
“We were, to use a technical term, blowing shit up,” the DP tells Jake Kring-Schreifels at The Ringer. “It had to be that way. Everything was about creating this authentic environment.”
During pre-production, Garland and Hardy worked closely with production designer Caty Maxey and other department heads to map out locations and choreography, as well as which shots would be built by a VFX team. They then built a 20-foot, three-dimensional scale model of the city, labeling where and how each section of Washington, DC, would be filmed.
“The aerial bombing of the Lincoln Memorial, for example, would contain real second-unit shots of the city that visual effects teams would overlay with fiery destruction. But as the sequence swooped down to ground level, production would move to Stone Mountain Park, where Maxey was responsible for furnishing the foreground of a massive assault,” Kring-Schreifels details.
One major benefit of the scale model was that it could be used by every creative department to pinpoint the locations of tactical explosions, as well as the physical dimensions of the space and a general flow of the action.
“We would have many, many meetings like, ‘OK, so this vehicle comes in here, they’re gonna approach here, but these guys are going to stop them, so then this is going to look like that,’” Hardy recalls.
“I remember walking onto that set, and by the time we got to it, it honestly did feel like the eve of the final battle,” Hardy says. “Everybody knew what they were going to do.”
Speaking with Matthew Jacobs at Time Magazine, Spaeny likened the road scenes to a play. But unlike theater, or even a typical movie shoot, Civil War changed locations every few days as the characters’ trek progressed, introducing constant logistical puzzles for the producers and craftspeople to solve.
Dunst’s husband Jesse Plemons makes a brief appearance in the film, but commands the scene as a menacingly inscrutable soldier carrying a rifle and wearing a distinct pair of red sunglasses.
“I can imagine that people might read some kind of strange bit of coding into Jesse Plemons’s red glasses,” Garland says in A24’s notes. “Actually, that was just Jesse saying, I sort of think this guy should wear shades or glasses. And he went out himself and he bought six pairs, and we sat there as he tried them on, and when he got to the red ones, it just felt like, yep, them.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
AI is rapidly transforming the media landscape, but are we ready for what’s coming next? A new study from Futuri, in partnership with CMG Custom Research, examines the role of AI in Media. The study, involving feedback from more than 5,200 US radio and TV news consumers, offers an unprecedented look into how audiences perceive and interact with AI-driven content.
Futuri CEO Daniel Anstandig delivered the results of this novel study on the Main Stage at NAB Show 2024 in a presentation that also featured Ameca, an autonomous AI-powered humanoid robot. He emphasized the dual role of AI as both a creator and a facilitator in media: “AI is inevitable, and wise adoption and integration by the media industry will be key in the industry’s resilience and growth in a time of unprecedented competition.”
The study highlights a surprising readiness among audiences to embrace AI within media platforms. Approximately one out of every five people surveyed believes they already listen to AI-powered radio stations, a perception that, while not accurate, indicates a significant openness to such technology. Nearly half of the TV viewers surveyed believe they have seen AI-presented news, suggesting a growing comfort with AI in familiar settings.
One of the most striking findings of the study is the difficulty audiences have in distinguishing between AI-generated voices and human ones: in listening tests, participants identified the AI voices as human 60% of the time. AI avatars used for presenting news also showed potential, although the technology was deemed not yet ready for mainstream broadcast, indicating a nearer-term fit for social and digital media.
Shifting news consumption habits also emerged from the study, with social media now leading as the primary news source, a trend that underscores the evolving ways in which audiences access information. This shift presents a crucial opportunity for broadcasters to leverage AI to enhance content delivery across various platforms, thus reaching a broader audience more effectively.
The demand for personalization in media consumption was another key insight. Approximately 45% of respondents expressed interest in customizing the personality of AI-driven radio or podcast hosts, and 41% showed interest in tailoring the type of content delivered, highlighting a strong consumer desire for customized media experiences.
Transparency remains a vital concern, with 90% of respondents indicating a preference for clear disclosure when AI is used in content creation. This finding emphasizes the need for transparency as media outlets increasingly integrate AI technologies.
Two decades of digital camera technology and increasingly capable software have given more people the chance to tell stories on film. The contention now is that such prosumer, even consumer, gear is of such high quality that even A-list filmmakers are using it.
The latest talking point is the extensive use of the $5,000-$6,000 DJI Ronin 4D, an integrated camera, lens and four-axis stabilized gimbal, to shoot $50 million sci-fi action thriller Civil War.
The camera’s low price was not the reason director Alex Garland wanted to use it. As he explained to Ben Travis at Empire Online, “It self-stabilizes, to a level that you control — from silky-smooth to verité shaky-cam. To me, that is revolutionary in the same way that Steadicam was once revolutionary. It’s a beautiful tool. Not right for every movie, but uniquely right for some.”
It enabled DP Rob Hardy to shoot and move the camera quickly without dollies or tracks, yet without the footage feeling too handheld.
Instead, the DJI Ronin 4D offered a distinctly human perspective. It was, notes Garland, “the final part of the filmmaking puzzle — because the small size and self-stabilization means that the camera behaves weirdly like the human head. It sees ‘like’ us. That gave Rob and I the ability to capture action, combat, and drama in a way that, when needed, gave an extra quality of being there.”
Gareth Edwards’ $80 million budget sci-fi feature The Creator was shot on the $4,000 Sony FX3 by Oren Soffer, guided by Dune’s Oscar-winning cinematographer Greig Fraser, for reasons of compactness and low-light capability.
While neither camera is certified by IMAX as an IMAX camera, both The Creator and Civil War were presented on IMAX screens because they used IMAX post-production tools and a sound design suitable for the giant format. Neither film may look quite as good as Dune: Part Two, which was shot on IMAX-certified ARRI Alexas, but the quality is nearly there.
And this is the contention of Jake Ratcliffe, technical marketing manager at camera rental house CVP. The gap in image quality between low and high-end cameras is closing, he argues, and the compromises you would previously have had to make with cheaper cameras are diminishing.
With image quality less of a differentiating factor, filmmakers have ever more choice over the tool for the job. RED originally designed the smaller, relatively cheap Komodo as a crash camera, but its lightweight, compact form factor and image quality that matches its bigger-brother V-Raptor cameras have seen it increasingly used on productions like Amazon Prime’s Road House.
Ratcliffe thinks these stories are showing that the process of filmmaking is changing. “The democratization of filmmaking equipment is going to allow more and more people to tell the story in a more engaging way than what would have been possible in the past. I think the industry will go a step further in this regard with Unreal Engine in the future too.”
Has camera technology using glass optics and digital sensors reached its natural peak?
How the Blockchain Can Open Access to Filmmakers
TL;DR
Leo Matchett, co-founder and CEO of Decentralized Pictures, says the blockchain has the potential to revolutionize the entertainment industry.
Decentralized Pictures is a blockchain-based platform where filmmakers can submit movie pitches and pay a submission fee in the project’s native token, FILMCredits. It was launched in 2021 by Matchett, American Zoetrope’s Michael Musante and Roman Coppola.
The writers’ and actors’ strikes last year offered clear evidence that the current operating model in Hollywood is not sustainable. Could blockchain technology address these challenges by opening access to independent filmmakers and sharing reward more equitably?
“The blockchain has the potential to revolutionize the entertainment industry,” says Leo Matchett, co-founder and CEO of Decentralized Pictures. “It can help create a more transparent and equitable industry, and it can also connect fans with their favorite artists in new and innovative ways.”
Decentralized Pictures, a blockchain-based platform where filmmakers can submit movie pitches and pay a submission fee in the project’s native token, FILMCredits, was launched in 2021 by Matchett, Michael Musante, VP of production and acquisitions at American Zoetrope, and Roman Coppola, with $50,000 in documentary funding from The Gotham Film & Media Institute. Since then, it’s been helping to transform the future of independent cinema.
“We are taking a novel approach to providing opportunities to filmmakers, and we’re relying on audiences to tell us which content and which filmmakers are most deserving,” explains Matchett of their Web3 finance platform.
Rather than taking the retrospective, data-driven approach that studios use to decide which projects to greenlight, DCP polls its community of members in real time about which of the projects submitted to it should receive further finance or support. Members, or “reviewers,” give each project a ranking score across value sets built on metrics like script, characters or plot lines, and social impact.
“This is done on blockchain as a peer-to-peer payment between the submitter and the reviewer,” Matchett says. “So it is literally the opposite of Kickstarter. A person who is looking for financing pays a fee into a smart contract that dynamically pays reviewers. They are peer-to-peer paying for peer review, and the data derived from those payments and the incentivized behavior of review is what indicates the most deserving filmmaker to move forward with. All the data is preserved, auditable and immutable on the chain, making it a fair and transparent way of deciding who should get opportunities to tell stories.”
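As a rough illustration of the mechanism Matchett describes, here is a minimal sketch in Python of a submission-fee escrow that pays each reviewer from the submitter’s fee and keeps the scores for ranking. This is a hypothetical model, not DCP’s actual FILMCredits contract code; all names and numbers are invented for illustration.

```python
# Hypothetical sketch of a peer-review escrow in the style Matchett describes:
# the submitter's fee is locked up and paid out, review by review, to the
# members who score the project. NOT Decentralized Pictures' real contract.

class ReviewEscrow:
    def __init__(self, submitter, fee, max_reviews):
        self.submitter = submitter
        self.balance = fee                 # fee locked in the "contract"
        self.per_review = fee / max_reviews
        self.scores = []                   # (reviewer, score) pairs, preserved

    def submit_review(self, reviewer, score):
        """A reviewer scores the project and is paid from escrow."""
        if self.balance < self.per_review:
            raise RuntimeError("review pool exhausted")
        self.balance -= self.per_review
        self.scores.append((reviewer, score))
        return self.per_review             # tokens paid to this reviewer

    def ranking(self):
        """Average score, used to compare submissions against each other."""
        return sum(score for _, score in self.scores) / len(self.scores)

escrow = ReviewEscrow("filmmaker_a", fee=100.0, max_reviews=20)
escrow.submit_review("reviewer_1", 8)
escrow.submit_review("reviewer_2", 6)
print(escrow.ranking())  # 7.0
print(escrow.balance)    # 90.0 still held for future reviewers
```

On an actual chain the score history would be append-only rather than a Python list, which is what makes the audit trail Matchett mentions tamper-evident.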
He says the current system is limiting opportunities for filmmakers. “It’s a lot about who you know and who you get in the room with. It’s about access to money and your proximity to the industry or your past experience. However, Decentralized Pictures is taking unsolicited material, which we outsource to the world to help us with the decision making. It is audience-driven content in that the audience determines which content they should be presented with.”
DCP’s approach is paying off. They have already produced and financed several documentaries and a pair of narrative features, including Calladita, a satire directed by Miguel Faus, which utilized NFTs (digital collectibles) to finance production and increase audience participation.
The thriller Cold Wallet directed by Cutter Hodierne premiered at SXSW 2024 and was judged by director Steven Soderbergh to be a “smart, spiky, off-center take on the vigilante genre.”
Holy Smokes at NAB Show
Its most recent project is the short comedy film Holy Smokes, created by Fiszman and Ares and starring Suvari, all of whom shared their involvement in the project at NAB Show following a special screening of the film.
“Filmmaker Kevin Smith read the top scripts that were submitted to us and curated by the community, and with his producer [Liz Destro] they selected Holy Smokes, which was the number one rated script.”
At a time when studios are concentrating on very high-budget, franchise-led, merchandisable tentpoles “built on known IP,” they are locking out indie filmmakers, says Matchett.
“What has also contributed to the decline of indie cinema is that streamers have significant power in the industry,” he says. “With that comes no real need to make offers to purchase content that have any reflection of what the budget was. You can spend $10 million making a film and get a million dollars from a streamer for an exclusive two-year deal, but you are stuck with a tough decision. Your film could get shelved and never seen by anyone, or if you don’t take their money you risk a net loss. It’s a difficult distribution environment right now.”
He continues: “There used to be a secondary release window with DVD and home video sales, but that has pretty much been destroyed by streaming. Part of the reason there are larger-budget tentpole films is to mitigate the risk of needing to have a significant portion of the revenue come through box office. You don’t have the secondary punch of revenue that you used to get. Instead, you are forced into a deal with a streamer.”
By highlighting their experience on the DCP platform, the panelists shed light on the transformative shift towards greater democracy within the film and TV industry.
“As the film industry continues to evolve,” Matchett says, “discussions like these pave the way for a more accessible and inclusive future where the voices of audiences hold considerable sway in determining the success and recognition of films and filmmakers.”
This year, the creator economy will generate a quarter of a trillion dollars in revenue, per Goldman Sachs. By the end of this decade, it will produce half a trillion dollars. Industry commentator Evan Shapiro believes that the future of the media industry is being driven by the creator economy and that there will be far more opportunities to make money there than in the traditional gatekeeper ecosystem.
That’s because of the industry-changing opportunity to go directly to fans and build communities of people who will pay for what an artist or creator produces.
Shapiro earns a living entirely from monetizing his work with online newsletters published on platforms like Substack, and counts himself as a creator.
He says the rest of the media industry needs to stop thinking of creators as influencers purely engaged in marketing, and to stop assuming the creator economy is driven by clicks on social media alone.
Taylor Swift and her self-produced, self-distributed Eras Tour movie, which made more than $250 million at the theatrical box office, is the poster woman for the creator economy.
Sure, these multi-millionaires represent less than one percent of creators working in the creator economy but Shapiro points to the hundreds of thousands “making middle class livings.”
The secret, Shapiro says, is that “creator-led enterprises do not need to be massive enterprises, with tens of millions of followers to make money. Creators from an array of disciplines are able to build their own small businesses based not on impressions but cemented by their ability to monetize the love of their work from their small, passionate communities.”
YouTube series Snake Discovery, for example, is produced by the owners of a pet shop in Minneapolis. It will generate more than $125,000 from its 3,500 Patreon members alone, on top of merch (which they sell a lot of) and various other sources of income.
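Those Patreon numbers are worth a quick sanity check. The $125,000 and 3,500-member figures come from the article; the per-member breakdown below is simple arithmetic, and it shows how modest each individual pledge can be:

```python
# Back-of-the-envelope arithmetic behind the Snake Discovery example:
# what the quoted annual Patreon revenue implies per member.
annual_patreon = 125_000   # dollars per year, as quoted in the article
members = 3_500            # Patreon members, as quoted in the article

per_member_year = annual_patreon / members
per_member_month = per_member_year / 12

print(round(per_member_year, 2))   # 35.71 dollars per member per year
print(round(per_member_month, 2))  # 2.98 dollars per member per month
```

In other words, a six-figure revenue line here rests on pledges of roughly three dollars a month, which is exactly the small-but-passionate community dynamic Shapiro is describing.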
Shapiro argues that most creator economy gigs are not dependent on billions of clicks or tens of millions of followers. Most of the best opportunities will come from businesses like Snake Discovery that build small but mighty, highly engaged, super-passionate and loyal communities around their work.
This leads Shapiro to state, “We are at the precipice of an explosion of consumption and spending in what I call the Community Economy — a segment of media focused not on pure reach, but rather built entirely around monetizing the passions of specific audiences.”
The economic power of the media universe is shifting to the creator-led “Community Economy,” he suggests. Anyone in media who wants to make money (so, everyone…) — “even those who do not run their own creator-led businesses,” says Shapiro, “will need to learn how to ply their trades in the creator economy. Those who do will find opportunities as great or greater than those of the past gatekeeper Media era. Those who do not, will not.”
Is the industry, as Shapiro suggests, on the verge of an explosion in consumption and spending in the creator-led community economy?
NAB Show Amplified: Sean Evans Has a Spicy IP Recipe
Sean Evans, co-creator and host of internet talk show “Hot Ones,” will divulge how he’s Heating Up the Zeitgeist on the Main Stage of the 2024 NAB Show.
Evans will share insights into First We Feast‘s IP and creative choices, his interview philosophy, and more at 11 a.m. (PT) on Tuesday, April 16 during a conversation with NAB VP of Content Design & Development Josh Miely. The session is open to all attendees; you can register to attend for free with code AMP05.
Here, he answers questions from NAB Amplify’s Emily M. Reigart about how “Hot Ones” fits into the lineup of modern talk shows and what it takes to make everything sizzle on camera.
Eater calls “Hot Ones” “a talk show for the 21st century” and also references its origins in your “stunt journalism” phase. How do you think about the show’s place in the Western canon?

I think “Hot Ones” is the internet’s version of the classic late night talk show that I grew up watching. We have a unique format with spicy wings, but the DNA of the show is rooted in broadcast tradition. “Hot Ones” straddles the line between the familiar and the novel; it’s both mainstream and esoteric at the same time.
I think Ricky Gervais described it best when he called “Hot Ones” “a mix between ‘Charlie Rose’ and ‘Jackass.'”
I think of “Hot Ones” as a shooting star in the constellation of pop culture.
While you clearly don’t mind making your guests physically uncomfortable, you have said in the past that the secret to a good interview is finding what guests want to talk about. Do you think that approach is similar to how Howard Stern and David Letterman would conduct these interviews? I’ve heard they were part of the inspiration for the show.

The people who are remembered for truly mastering the art of the celebrity interview are those who understand that the audience needs to see a great show. That’s show business. Every talk show host of note understands that, so in that way we all have the same North Star.
I’m of the opinion that an interview is dependent on the generosity of your guest, but I wouldn’t say that’s necessarily true of the living legends you mention in your question. We all have unique styles, but the direction we’re pulling in is all the same.
Tell me about the kit and crew required to make the show. How many cameras, mics, etc?

We use five cameras (Sony FS7s, I believe), which includes a wide shot, and a pair of cameras on me and the guest. For sound, we wire and use boom mics on both sides of the table. There’s a lighting grid as well, but otherwise, just two trays of wings and, of course, the hot sauce lineups.
How many people are involved in making an episode?

On a shoot day, there’s usually about nine to 10 people on set between the camera/sound crew, Dom and Victoria producing, and myself. But, it takes a village, from booking the show, editing it, selling it, etc. The brand has gotten so big; there are many more hands involved now than when we started.
“Hot Ones” is now one of six shows First We Feast makes. What qualities make a show a good fit for the brand?

First We Feast is a brand at the intersection of food and pop culture. The best way I like to describe our ethos is “dumb stuff for smart people,” and then there has to be some sort of food angle. Naturally.
And if you could switch to a different FWF show, which would you choose?

Back in the day, we had a host named Mikey Chen who did a whole show about ice cream. After eating all these hot wings over the years, I think an ice cream show would be a nice pivot for me.
You’ve now made 23 seasons of “Hot Ones,” eating more than 3,000 chicken/vegan chicken wings in the process. How has the show evolved since 2015, and how have you changed as a host/interviewer?

We definitely take the interview a lot more seriously now than when we first started, but otherwise the show has mostly stayed the same. From the format to the set, and even the bald guy hosting it, we’re nothing if not consistent. The fundamentals of making the show have gone mostly unchanged for the nine years we’ve been churning out episodes.
Tyler Chou: Learn the Legal Truths Behind Viral Content and How To Protect Yourself
TL;DR
Tyler Chou, founder and CEO of Tyler Chou Law For Creators, helps YouTube creators navigate minefields like cease-and-desist orders and copyright claims.
Chou’s YouTube channel, “Tyler Chou The Creators’ Attorney,” attracted 31,000 subscribers in its first year. She helps creators build out their businesses, using YouTube as their creativity incubator and as the marketing arm of their business.
The session will explore the legal traps that many TikTok, YouTube, LinkedIn and other creators unwittingly fall into, as well as hot-button issues like copyright infringement, fair use controversies, the ever-changing FTC disclosure requirements, and the hidden dangers in brand deals. Register here to attend with code AMP05.
What do you do if you get a cease-and-desist letter, a copyright claim from a rights holder (either directly or from YouTube), or a copyright strike from YouTube?
One of the scariest parts of being a creator is receiving a C&D letter or a copyright claim or strike. What do you do?
“Well, first of all, do not panic. Sometimes letters are sent and they might not be the rightful rights holder or maybe the violation or infringement has been corrected,” says Tyler Chou, founder and CEO of Tyler Chou Law For Creators, a law firm whose mission is to protect and support big creators on YouTube.
As for a copyright claim or strike, each situation is different and turns on the facts, but bring your facts to Chou and she can guide you.
“Legal Landmines on Social Media: What Every Creator Needs to Know NOW!” is presented by Chou as part of the all-new Creator Lab at this year’s NAB Show. She will explore the legal traps that many TikTok, YouTube, LinkedIn and other creators unwittingly fall into. Moderated by Creator Lab host Jim Louderback, editor and publisher of Inside the Creator Economy, the session takes place on Monday, April 15 from 11:40-11:50 AM in the Creator Lab Theater (SU4154).
Learn about protecting IP, understanding contracts, and navigating the legal landscape of digital content creation. Also discover the shocking legal truths behind viral content and learn how to protect yourself, your creative output and your company from unscrupulous brands, agents, platforms and “partners.”
In 2022, Chou started her YouTube channel, Tyler Chou The Creators’ Attorney, which quickly grew to 20,000 subscribers within three months and upwards of 31,000 in her first year. She helps creators build out their businesses, using YouTube as their creativity incubator and as the marketing arm of their business.
“I am more than an attorney — I am my clients’ confidante, coach and biggest cheerleader,” she says.
Previously, Chou spent 15 years in Hollywood as an attorney for Disney, Skydance and BuzzFeed, as well as at a large law firm where she was on the talent team of big names like Tom Hanks, Marisa Tomei and Robert Rodriguez.
“After protecting studios and companies for 15 years, I did not feel like I was doing meaningful work… that changed anyone’s life, directly,” she explains. “I was tired of protecting the big studios and production companies who took advantage of creatives. And, I wasn’t directly representing creatives, the real reason why I went into entertainment law.
“So I walked away. Even though it’s scary and hard building my own law firm, each day I wake up excited and it finally… feels right!”
Chou says what she does now — protecting creators — finally aligns with her values.
“Creators show us, courageously, that we do not have to be handcuffed to a soul-sucking nine-to-five corporate job because someone, or society, told us to as children. We can walk away from a job, quit and start a YouTube channel and share our story,” she says.
“And creators create based on their own stories, with their phone or camera without a studio, streamer or network to tell them whether or not their stories are worthy of being picked out of development hell and will be produced against great odds.
“Creators are taking back the power and they create without anyone telling them no.”
However, that doesn’t automatically mean creators can escape the clutches of big business. And, like any person or any business, big or small, they can make mistakes.
Chou has made it her mission to be their guardian angel.
“Creators are the soul of humanity, who must be protected at all costs. And I am honored to pick up the sword and shield to protect all creators.”
The rapid evolution of production technologies is making advanced studio tools and techniques accessible to creators at all levels.
April 11, 2024
Posted
April 11, 2024
Decentralized Pictures: Rethinking the Film Business Model, From Script to Screen
TL;DR
Leo Matchett, co-founder and CEO of Decentralized Pictures, says the blockchain has the potential to revolutionize the entertainment industry.
Decentralized Pictures is a blockchain-based platform where filmmakers can submit movie pitches and pay a submission fee in the project’s native token, FILMCredits. It was launched in 2021 by Matchet, American Zoetrope’s Michael Musante and Roman Coppola.
The writers’ and actors’ strikes last year offered clear evidence that the current operating model in Hollywood is not sustainable. Could blockchain technology address the challenges by opening access to independent filmmakers and sharing reward more equitably?
“The blockchain has the potential to revolutionize the entertainment industry,” says Leo Matchett, co-founder and CEO of Decentralized Pictures. “It can help create a more transparent and equitable industry, and it can also connect fans with their favorite artists in new and innovative ways.”
Decentralized Pictures, a blockchain-based platform where filmmakers can submit movie pitches and pay a submission fee in the project’s native token, FILMCredits, was launched in 2021 by Matchett, Michael Musante, VP of production and acquisitions at American Zoetrope, and Roman Coppola, with $50,000 in documentary funding from The Gotham Film & Media Institute. Since then, it’s been helping to transform the future of independent cinema.
“We are taking a novel approach to providing opportunities to filmmakers, and we’re relying on audiences to tell us which content and which filmmakers are most deserving,” explains Matchett of their Web3 finance platform.
Rather than the retrospective, data-driven approach studios use to decide which projects to greenlight, DCP polls its community of members in real time about which of the projects submitted to it should receive further finance or support. Members, or “reviewers,” give each project a ranking score across metrics like script, characters, plot lines and social impact.
“This is done on blockchain as a peer-to-peer payment between the submitter and the reviewer,” Matchett says. “So it is literally the opposite of Kickstarter. A person who is looking for financing pays a fee into a smart contract that dynamically pays reviewers. They are paying peer-to-peer for peer review, and the data derived from those payments and the incentivized behavior of review is what indicates the most deserving filmmaker to move forward with. All the data is preserved, auditable and immutable on the chain, making it a fair and transparent way of deciding who should get opportunities to tell stories.”
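The mechanism Matchett describes amounts to an escrow that splits a submitter’s fee among reviewers as they file scores, while keeping an auditable record. A minimal Python sketch of that flow (this is illustrative only, not DCP’s actual smart contract; all names, fees and metrics are assumptions):

```python
# Illustrative model of the peer-to-peer review payment flow described above.
# NOT Decentralized Pictures' actual contract -- just a sketch of the idea:
# a submitter's fee is escrowed, paid out per completed review, and every
# transaction is recorded in an append-only ledger.

from dataclasses import dataclass, field

@dataclass
class ReviewEscrow:
    submitter: str
    fee: float                    # fee paid in (e.g., in FILMCredits)
    per_review_payout: float      # amount released per completed review
    balance: float = 0.0
    ledger: list = field(default_factory=list)  # append-only, auditable record

    def fund(self):
        """Submitter locks the fee into escrow."""
        self.balance = self.fee
        self.ledger.append(("fund", self.submitter, self.fee))

    def submit_review(self, reviewer: str, scores: dict):
        """A reviewer files scores (script, social impact, etc.) and is
        paid directly out of the escrowed fee."""
        if self.balance < self.per_review_payout:
            raise RuntimeError("escrow exhausted")
        self.balance -= self.per_review_payout
        self.ledger.append(("review", reviewer, self.per_review_payout, scores))

    def average_score(self) -> float:
        """Aggregate ranking derived from the recorded reviews."""
        reviews = [e for e in self.ledger if e[0] == "review"]
        all_scores = [s for e in reviews for s in e[3].values()]
        return sum(all_scores) / len(all_scores)

escrow = ReviewEscrow(submitter="filmmaker", fee=100.0, per_review_payout=10.0)
escrow.fund()
escrow.submit_review("reviewer_a", {"script": 8, "social_impact": 9})
escrow.submit_review("reviewer_b", {"script": 7, "social_impact": 6})
```

On a real chain, the ledger would be the transaction history itself, which is what makes the process auditable and tamper-proof.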
He says the current system is limiting opportunities for filmmakers. “It’s a lot about who you know and who you get in the room with. It’s about access to money and your proximity to the industry or your past experience. However, Decentralized Pictures is taking unsolicited material, which we outsource to the world to help us with the decision making. It is audience-driven content in which the audience determines which content they should be presented with.”
DCP’s approach is paying off. They have already produced and financed several documentaries and a pair of narrative features, including Calladita, a satire directed by Miguel Faus, which utilized NFTs (digital collectibles) to finance production and increase audience participation.
The thriller Cold Wallet directed by Cutter Hodierne premiered at SXSW 2024 and was judged by director Steven Soderbergh to be a “smart, spiky, off-center take on the vigilante genre.”
Holy Smokes at NAB Show
Its most recent project is the short comedy film Holy Smokes created by Fiszman and Ares and starring Suvari, all of whom will share their involvement in the project at NAB Show following a special screening of the film.
“Filmmaker Kevin Smith read the top scripts that were submitted to us and curated by the community, and with his producer [Liz Destro] they selected Holy Smokes, which was the number one rated script.”
At a time when studios are concentrating on very high-budget, franchise-led, merchandisable tentpoles “built on known IP,” they are locking out indie filmmakers, says Matchett.
“What has also contributed to the decline of indie cinema is that streamers have significant power in the industry,” he says. “With that comes no real need to make offers to purchase content that has any reflection of what the budget was. You can spend $10 million making a film and get a million dollars from a streamer for an exclusive two-year deal, but you are stuck with a tough decision. Your film could get shelved and never seen by anyone, or, if you don’t take their money, you risk a net loss. It’s a difficult distribution environment right now.”
He continues: “There used to be a secondary release window with DVD and home video sales, but that has pretty much been destroyed by streaming. Part of the reason there are larger-budget tentpole films is to mitigate the risk of needing to have a significant portion of the revenue come through box office. You don’t have the secondary punch of revenue that you used to get. Instead, you are forced into a deal with a streamer.”
By highlighting their experience on the DCP platform, the panelists will shed light on the transformative shift towards greater democracy within the film and TV industry.
“As the film industry continues to evolve,” Matchett says, “discussions like these pave the way for a more accessible and inclusive future where the voices of audiences hold considerable sway in determining the success and recognition of films and filmmakers.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Steve Raizes, executive vice president of podcasting and audio at Paramount Global, says the death of podcasting has been greatly exaggerated.
The podcasting industry has been on a rollercoaster, Raizes acknowledges, but in the end it’s an industry still on the rise.
“Creatively, podcasting remains incredibly robust,” he says, but the industry needs to push for better discovery and, simultaneously, continue to both engage and educate the advertising community about the value of podcast advertising in terms of both top and bottom of the funnel.
Steve Raizes, executive vice president of podcasting and audio at Paramount Global, says the death of podcasting has been greatly exaggerated.
The headlines from last year as big tech scaled back don’t show the whole picture. The podcasting industry has been on a rollercoaster, but in the end it’s an industry still on the rise. From big Hollywood studios still leaning on audio as a place to grow their fans and IP, to established platforms evolving what podcasting can mean in the future, there are companies seeing success and getting back to basics on what it means to be an on-demand digital content creator in the audio space.
At Paramount Global, Raizes and his team produce award-winning podcasts such as The Daily Show: Ears Edition, MTV’s Official Challenge Podcast, On Fire with Jeff Probst, 48 Hours, and more.
“We had an incredibly frothy period and the market is now stabilizing (which has unfortunately had real impacts on people and businesses),” Raizes says of the buzz circulating through the podcasting community about the industry’s untimely demise. “However, the overall trendlines remain extremely positive. We’re now beginning to see an adjustment — including deals based on known metrics along with an investment in shows that have a proven business model behind them.”
There are, however, steps that can be taken to overcome this inaccurate perception. “On the optics front we need to continue to plant the flag for what makes this industry so great,” he urges.
“First and foremost — creatively, podcasting remains incredibly robust. And with that, we continue to pull in new audiences who care deeply about the medium. We need to push for better discovery to help aid those pods and, simultaneously, continue to both engage and educate the advertising community of the value of podcast advertising in terms of both top and bottom of the funnel.”
Podcasting has become a staple in our digital diet, thanks to platforms that make it easier than ever for anyone to broadcast their ideas. This boom has brought a wealth of stories, insights, and discoveries to listeners worldwide, proving that the medium is far from fading into the background. Yet, this ease of access leads some to oversimplify what it takes to create a podcast that truly resonates. It’s not just about having the right equipment or a novel idea.
Raizes puts it succinctly: “There is no ‘make podcast’ button that you can press to magically produce premium audio content.” He emphasizes that behind every podcast that captures and keeps our attention, there’s a blend of dedication, creativity, and a unique perspective. “Every successful pod takes time, patience, artistry, and a strongly differentiated POV.”
Jackie Levine, head of Television and Film at Audible, shares her thoughts on important trends in audio entertainment.
April 19, 2024
Posted April 11, 2024
The Human-Machine-AI Symphony of Storytelling
The 2024 NAB Show will serve as a stage for a remarkable performance – a symphony of human creativity, machine power and artificial intelligence. This isn’t a battle for dominance; it’s a powerful new act in the ever-evolving story of media creation. AI isn’t here to replace the irreplaceable human touch – it’s here to become a vital partner in the creative trilogy.
A Legacy Built on Teamwork
Media creation has always been a collaborative effort. From the invention of the camera, a physical device capturing light, to the editing software running on computers, each technological leap has been a testament to the human-machine partnership. But the heart of storytelling has always resided with the human – the guiding eye behind the lens, the hand shaping the narrative, the emotions poured into every scene.
AI: The Amplifying Force
In this evolving landscape, AI emerges as a transformative force. It acts as a powerful amplifier, not just for machines, but for human capabilities as well.
The AI Conductor: By analyzing real-world recordings, AI can automatically remove background noise for crystal-clear audio or adjust levels for optimal listening across devices. It can even isolate specific instruments or voices, giving human sound editors complete control in post-production.
The Smart Editing Assistant: Editing suites are poised for a transformation. AI co-pilots will analyze footage captured by physical cameras, suggesting edits based on pacing, emotional impact and storytelling principles. This frees human editors to focus on the more nuanced creative decisions – crafting a cohesive narrative flow and injecting artistic flourishes.
The VFX Muse: Special effects creation, often a time-consuming process, can benefit from AI’s analytical prowess. AI algorithms can generate realistic green screen backdrops that seamlessly integrate with physical sets. It can suggest potential effects based on the content and desired mood or recommend techniques for creating flawless transitions and natural-looking compositing, empowering human artists to bring their visions to life.
The Personalized Bard: AI can personalize content delivery based on individual preferences gleaned from user data. It can optimize content delivery for smooth playback on any device, ensuring a seamless experience. Real-time captioning and translation powered by AI can break down language barriers and expand audience reach, catering to a global audience with the help of physical broadcasting machines.
The Human-Machine-AI Symphony: A Shared Stage
The success of this AI-powered future hinges on a shared stage. Let’s highlight the importance of seamless integration between AI, existing infrastructures and established human workflows. Human creators will remain at the helm, leveraging AI’s strengths and the power of physical machines to automate tasks, analyze vast amounts of data and identify patterns. This frees them to focus on the irreplaceable human touch – crafting compelling narratives, making strategic content decisions and injecting the emotional core that resonates with audiences.
Beyond the Show: A Future Filled with Potential
The impact of AI extends beyond the studios. It has the potential to democratize media creation by empowering individuals and smaller production houses. With AI-powered editing software and cloud-based rendering farms (which themselves rely on physical infrastructure), professional-quality content creation becomes more accessible. Additionally, AI-driven content analysis can provide valuable insights for audience engagement and content optimization, leading to a more data-driven approach to storytelling within the human-machine-AI trilogy.
The Journey Into this AI-powered Future Requires a Collaborative Composition
Education and Training: Upskilling the workforce is crucial. Media professionals need to understand AI capabilities and limitations to leverage them effectively within existing workflows alongside the machines they utilize.
Open Dialogue and Collaboration: Fostering open communication between developers, policymakers and media professionals is essential. Discussions on ethical considerations, responsible development practices and the establishment of industry-wide standards for AI integration in media creation are key to a successful human-machine-AI symphony.
Investing in Innovation: Continued investment in AI research and development that considers the human-machine partnership is vital for responsible and effective integration within the media landscape. At NAB Show 2024, advancements in AI tools specifically designed for media creation applications could encourage further innovation and exploration.
Bridging the Gap Between Generative AI and Real-World Storytelling
While generative AI often focuses on its potential to create content from prompts, NAB Show takes a different approach. The focus here is on “real-world” applications – the tangible tools that media organizations use every day: cameras, editing software, rendering engines, asset management, playout and distribution channels. These elements remain crucial for crafting compelling narratives. NAB Show differentiates itself by demonstrating how generative capabilities, including AI, can meet the needs of the storytelling community, operating within the confines of real-world production environments.
The 2024 NAB Show brings together senior media practitioners to discuss where AI has found its place (and where it hasn’t) in media pipelines.
April 11, 2024
Posted April 11, 2024
“Star Trek” and the Strange New Worlds of Spatial Computing
TL;DR
Once confined to the realms of Star Trek and the visionary mind of its creator Gene Roddenberry, the holodeck is edging closer to reality, according to Jules Urbach, founder and CEO of cloud graphics company OTOY.
Growing up in Los Angeles with the Roddenberrys as neighbors, Urbach is best friends with Gene’s son Rod, the CEO of Roddenberry Entertainment. His cloud graphics company OTOY has teamed with the Roddenberry estate and Apple to deliver new 3D and interactive experiences for users of Apple Vision Pro.
On Tuesday, April 16 at 3:00 PM at NAB Show, Urbach will share the making of “The Archive,” a multi-decade collaboration between OTOY and Roddenberry Entertainment that aims to capture Gene Roddenberry’s lifetime of works with historical accuracy and holographic immersion.
Urbach will also reveal long-term plans to build the tools that will enable spatial content creation experienced through headgear and, eventually, as holograms in a holodeck.
Imagine stepping into the Holodeck, a concept once confined to the realms of Star Trek and the visionary mind of its creator, Gene Roddenberry. That future is edging closer to reality, according to Jules Urbach, founder and CEO of OTOY, who will showcase the latest developments in the technology at NAB Show.
These breakthroughs are “a major step towards realizing that goal,” Urbach says. “This is an exciting inflection point, and we are just at ground zero.”
Urbach’s cloud graphics company has teamed with the Roddenberry estate over many years and, more recently, with Apple, to deliver new 3D and interactive experiences for users of Apple Vision Pro, and will share their findings at the NAB Show session “Boldly Go: Star Trek’s Voyage in the Age of Apple Vision Pro.” Part of the Core Education Collection: Create Series, the session will be moderated by entertainment industry futurist Ted Schilowitz from 3:00-4:30 PM on Tuesday, April 16 in Rooms W210-W211.
Urbach grew up in Los Angeles with the Roddenberrys as neighbors, and he is best friends with Gene’s son Eugene “Rod” Roddenberry, the CEO of Roddenberry Entertainment. Urbach says it has been a long-term vision of his not only to recreate Star Trek experiences with full visual immersion, but to build the tools that will enable spatial content creation experienced through headgear and, eventually, as holograms in a holodeck.
“One of the fundamental messages I want to talk about at NAB is that there is a long term plan to find a way to create the tools that will eventually render a holographic display inside of a room — and we will get there,” Urbach says.
Urbach says he was inspired to start OTOY by Jon Karafin, who has vast experience in holographic engineering, including at specialist camera developer Lytro and as CEO of Light Field Lab. Karafin — who will also appear on the panel at NAB Show — is already commercializing panels capable of holographic display. Urbach was so impressed that OTOY invested in Light Field Lab.
“You could cover a wall with these panels today as a step toward the holodeck. This is the future,” Urbach says. “Costs will come down and technology will advance.”
At NAB Show, Urbach and Roddenberry will present OTOY concept videos and documentary films from recent Roddenberry Archive releases, which have been remastered specifically for the Vision Pro. These include unique interviews with George Lucas and Stan Lee exploring Gene Roddenberry’s influence on Star Wars and Marvel.
“There is nothing else like it out there,” Urbach says of the Vision Pro. “The quality of ray tracing and resolution is incredible. And this is just Version 1. Just think of where we are now with the iPhone 15. There is so much potential.”
He points to the Universal Scene Description format as an important standard for building the ecosystem of spatial content creation. OTOY is a member of The Alliance for OpenUSD — a group formed by Pixar and Apple — which is working to standardize the USD format across the industry to help artists and developers create and deploy complex real-time 3D experiences at scale.
Featured in the Roddenberry archive’s native spatial content for Apple Vision Pro are 1-meter by 1-meter light field cubes that are displayed at 90 frames per second in 4K resolution per eye, pushing the boundary of visual fidelity.
“We can scan an actor in 3D, storing that spatial content in a mezzanine format then render at any resolution. This could be to the Vision Pro or a holographic display; the output is different, but the fundamentals are the same. You can move through the video seeing light and reflections with fully path traced real time lighting, including dynamic glossy reflections and shadows, bringing new levels of photorealism to immersive content. This is what we have cracked.”
Urbach recalls interviewing his friend’s father at age 12 for the school newspaper. “It’s been a dream come true to digitize all of Gene Roddenberry’s inspirations, interviews and concept artwork into the archive,” he says.
“There are things in the archive that blew me away, such as the notes that Gene shared with [science fiction writer] Arthur C. Clarke.”
When the original television series was canceled in 1969, Roddenberry lobbied Paramount Pictures to continue the franchise through a feature film. He was encouraged to pursue this by Clarke, who had seen his own short story made into 2001: A Space Odyssey by Stanley Kubrick in 1968.
“Without the inspiration and encouragement of Arthur C. Clarke, the continuation of Star Trek might not have happened,” he explains.
Urbach, Roddenberry and Karafin will be joined by Richard Kerris, NVIDIA’s head of Media & Entertainment, to highlight how diverse technological advances — ranging from light field displays and virtual production to generative AI and decentralized computing — are transforming the media and entertainment industries.
When considering where we are today with spatial entertainment experiences and discussions about the holograms — which are no longer hypothetical — it is worth recalling Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”
New research reveals that consumers are eager for more advanced augmented and mixed reality experiences (particularly if Apple is involved).
April 10, 2024
Posted April 10, 2024
How Documentary Filmmakers Can Best Leverage AI
TL;DR
Award-winning documentary producer and picture editor Megan Chao says artificial intelligence has evolved at an accelerated pace, but the long-term impacts have yet to be studied or seen.
The ethics surrounding AI and its usage tolerances have challenged the basis of the work of documentary filmmakers as contributors to the historical record.
Artificial intelligence is set to transform the Media & Entertainment industry. While generative video and voices will soon disrupt scripted film and television, documentaries will always be defined by their key elements — they’re real. But that doesn’t mean AI can’t help unscripted creators tell better stories, faster and cheaper.
Award-winning documentary producer and picture editor Megan Chao, VP of development and production at Birman Productions, Inc., has a successful track record of managing diverse creative teams and strategic partnerships, while spearheading projects from inception through show delivery. She is currently the showrunner for Eden, a global environmental sustainability documentary series, and is directing an untitled social documentary feature about the street harassment of women in public spaces.
“Artificial intelligence has certainly been front and center for us working in various facets of media, and it has evolved at an accelerated pace,” says Chao. “So many technologies, programs and plug-ins have emerged to capitalize on streamlining and maximizing efficiencies in our workflows, but the long-term impacts have yet to be studied or seen.”
AI Ethics for Documentary Filmmakers
For documentary filmmakers, “the ethics surrounding AI and its usage tolerances really challenges the basis of our work,” she continues. “We have an unspoken agreement with our audiences that the projects we create and publish have been vetted for accuracy, in facts and portrayals. We are contributors to the historical record. When documentarians begin to engage generative AI in the creation of their projects, without appropriate disclosures or transparency about their practices, we start to erode that audience trust. This becomes even more problematic when machines are learning from this published material, using inaccurate content to generate more content.”
Entertainment value is a key performance indicator, Chao says, but documentary and nonfiction program creators, programmers and distributors “have the added and ultimate responsibility for protecting the craft from this threat to our human record. If we think that social media is responsible for the proliferation of fake news, imagine how AI-generated content will throw additional fuel on the fire. As generative AI becomes more sophisticated, it will become harder to distinguish fact from fiction.”
In addition, she says, the marketplace has been experiencing an overall contraction, “leading to layoffs, smaller budgets for projects, leaner production and post production teams, and fewer opportunities. This has opened the door for the deployment of artificial intelligence by companies who are looking for cost savings and task efficiencies, and I think the implications have yet to be fully understood or clearly defined.”
While technological advancements should absolutely be celebrated and championed, Chao says, “greater emphasis needs to be placed on the value of human creators. People are the ultimate innovators, and we don’t want to squash that spirit or be suppressive in the ways that we work, especially if new, inspiring creations emerge.”
But that doesn’t mean that nonfiction and documentary producers shouldn’t consider how AI can help them, Chao cautions.
“There are so many points within the production and post production workflows where it’s eliminating mundane tasks, fixing errors from the field, or refining quality of output. But I think we are at a crossroads, where it becomes imperative for best practices and guardrails to be more clearly defined and understood in the nonfiction genre, and perhaps overall for the media industry,” she says.
“We need to put some of this social responsibility on the corporations that are developing the very tools that help us. Conversations need to go beyond profit sharing and responsibility to shareholders, and need to, instead, foster the respect and appreciation for the creators who ultimately come up with the award-winning programs that drive viewers to consuming our content. We need to continue putting pressure on entities working with us to uphold the ethics, morals and values behind our work.”
Lessons for Media Pros
Chao says she wishes that media pros could better understand “the hoops documentarians jump through for their craft,” adding, “We are scrappy, multi-crafted individuals who have chosen a career of passion and dedication.”
Documentarians, she says, differ from other filmmakers in that they can spend many years on a single project. “But it’s a privilege to be able to tell the stories of people from all walks of life, past and present, to uncover new truths, break barriers, uplift communities and inspire social change through our work,” she continues.
“We don’t get the luxury of scripting what we capture out in the field or controlling the situations that contribute to our storytelling. The pathways we take and the business models we employ to tackle this kind of storytelling take many shapes and forms, and the sacrifices for the love of the craft can sometimes come at a greater cost, personally and professionally, than for those working in other genres.”
At NAB Show, Chao encourages attendees to seek out information about cloud-based workflows, automation in field production, and automation in post-production.
“They should be finding out the ways that these technologies can help save them time and money, since time is money,” she says.
“For example, if footage from the field from halfway around the world can be safely transmitted back to producers and picture editors at home base, they can get a jump start on familiarizing themselves with the content to start post production before the field producers return. Or production and post production teams can collaborate from virtually anywhere with access to cloud-based technology. Or if travel budgets are tight, directors might be able to conduct interviews remotely in a way that still feels intimate and personal. So many solutions exist and NAB is certainly the place to experiment.”
Get ready to witness the trial of the century as AI faces charges for Conspiracy to Conformity in Content Creation.
April 19, 2024
Posted April 10, 2024
How to Think About AI in Entertainment and Not Short Circuit
Order vs. Chaos
Just one year after OpenAI turned the world upside down with GPT-4, NAB Show brings together senior media practitioners to discuss where AI has found its place (and where it hasn’t) in media-making pipelines.
It’s important that show attendees “take away a clear understanding of what’s real right now and what’s coming in the near term, rather than what is aspirational,” explains Andy Maltz, principal at General Intelligence, who will moderate the session “Navigating the AI Revolution in Entertainment,” held Sunday, April 14 at 11:00 AM.
“While it’s understandable that there’s anxiety, I don’t think panic helps anytime, anywhere. We’re going to take a rational, dispassionate, educated, and long-term view of AI’s strength and weaknesses.”
The industry executives joining Maltz come from a wide range of media-related verticals — motion pictures, television, streaming, and sound — and will offer an objective view of the real-world impact of AI.
They include Bill Baggelaar, independent media technology consultant; Rick Hack, head of Media & Entertainment Partnerships, Intel; Melody Hildebrandt, CTO, FOX; and Scott Rose, CTO at VSI.
“I think it is fine to be concerned, if it heightens your awareness,” Maltz says of the frenzy around the topic. “Anyone who ignores what is happening does so at their own peril. At the same time, it doesn’t mean that every job in media creation and distribution will be replaced by artificial intelligence.
“After all, when the word processor was invented, it did not eliminate great novelists; it helped more of them to achieve.”
Panelists will share views on content authenticity, which Maltz says is of particular importance for news. “We need ways of validating whether an image or sound clip is real or fake and then mechanisms that demonstrate that clearly to anyone who consumes them.”
There are also issues around how data and content should be licensed for the training of AI models. Both issues can be treated separately but also, of course, overlap.
A Long View on a New Technology
Maltz has over four decades of experience instigating, developing, and deploying technologies in the media and entertainment industry.
He currently serves as chair of ISO/TC 36 Cinematography, the motion picture industry’s international technical standards body. His previous roles include the founding head of staff of the Academy of Motion Picture Arts and Sciences’ Science and Technology Council; founding board member of the Academy Software Foundation; founder and CEO of digital cinema pioneer Avica Technology; and co-initiator of the Advanced Authoring Format, an industry-standard media authoring interchange format.
He views AI through this lens and says it is another in a long line of technologies that the industry has initially feared before co-opting, standardizing and using to its benefit.
“If you go back to the beginning of motion pictures, there has been constant technology-driven disruption. What typically happens is that when innovations are introduced, there’s a wild west of experimentation and use.
AI in Context
“For example, there were many different film formats around before the industry standardized on 35mm. There were different competing systems when color came in. The same with digital sound, digital VFX, early digital cinema systems, digital workflow and so on.
“What’s different now is that the AI innovation touches just about every discipline within media creation and distribution. AI and related technologies are everything everywhere all at once, and we are in that wild west phase now.
“But — and I’d even put money on it — we’re probably going to get to the same place we’ve always ended up — which is an agreement on technical interfaces and data formats and business practices. That is necessary in order to continue to have a thriving $40 billion global motion picture industry.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
Amid fears over the use of generative AI in Hollywood, artificial intelligence seems less likely to replace humans than to assist them.
Posted April 10, 2024
“Penelope:” Innovating Color Workflows in the Cloud
TL;DR
Panavision-owned post facility Light Iron developed and deployed an all-cloud workflow that was used to complete dailies, VFX pulls, conform, and color on “Penelope,” a new episodic television project from Duplass Brothers Productions.
The project has been added to the MovieLabs 2030 Vision Showcase as an example of cloud-based workflows.
The workflow focused primarily on enhancing efficiency and reducing production costs, and “significantly accelerated” the availability of dailies for creative and editorial teams.
Cloud costs remain a major concern for the whole process, however, and the playback of raw camera files from the cloud was not a seamless experience.
One of the few areas of the post-production workflow yet to be convincingly replicated in the cloud is color management and grading. That’s due in part to concern that what one person sees in a calibrated suite might not match what a client sees when reviewing the images from another location.
Control over color is of prime concern to filmmakers and particularly cinematographers worried that their authorship of the image on set won’t be translated all the way through the pipe to screen.
Now there’s a real-world example of an end-to-end cloud-based color workflow to refer to. Organized by Panavision-owned post facility Light Iron and using tools from a number of partners, not least of which was AWS Cloud storage, the results even have MovieLabs purring about the advances such a workflow can bring to movie and TV production everywhere.
The workflow covers everything from capture to post-production; camera footage from a live production shoot was ingested into cloud storage, while dailies, conform, color, and VFX pull services were performed using cloud workstations acting directly on that ingested media.
Penelope itself is directed by Mel Eslyn, and follows its eponymous 16-year-old lead character (played by Megan Scott) as she ventures into the wilderness and attempts to build a life outside of contemporary civilization. Nathan M. Miller was cinematographer, and Light Iron colorist Pat Fitzgerald performed the final color grade.
This was no experiment. The cloud workflow would not have been possible on this real production without the backing of indie producers Duplass Brothers Productions.
Moving color and dailies processes into the cloud “better aligns with a more ‘run-and-gun’ style workflow and allows more accessibility at different levels,” says Alex Regaldo, head of production for Duplass Brothers Productions. “We had an editor based in Seattle, but our director and assistant editors were in LA. Even though we were in different locations, we could all be working. So, using the cloud made the most sense. It’s just more our vibe.”
An earlier version of the workflow was used on the show Biosphere, which piqued interest from both parties in continuing to develop an advanced, cloud-based process.
MovieLabs goes into detail about the workflow if you want all the nuts and bolts. We’ve summarized the benefits and also the areas where the technology behind the workflow needs a little more work.
First some context: The standard post-production method over the last decade has included “lots of less-than-optimal workflows,” such as multiple terrestrial data movements, project file management, asynchronous status updates, and reviews, emails, phone calls and waiting.
Dailies typically take 12-16 hours to turn around, usually with a dedicated overnight operator for every show. VFX pulls similarly require a team of people to find, load, and transcode every submitted shot list manually, with a delivery window averaging 24-48 hours per request. Even the conform process averages a week of work due to the sheer amount of time needed to wrangle hundreds of thousands of frames from archival tape, VFX vendors, titling, and graphics facilities, etc.
“The reliance on manual and segmented workflows not only contributes to operational bottlenecks but also increases the likelihood of errors, miscommunications and data loss,” says MovieLabs.
“Each transition in the production phase requires considerable manual effort, further exacerbating the risk of inaccuracies that necessitate time-consuming corrections and redeliveries. The industry has become numb to this inefficiency simply because it has been the status quo for so long.”
Cloud technology, internet connectivity and software tools now exist to change all this.
About the Workflow
Light Iron designed a workflow where the original camera files (OCFs) moved to an AWS cloud as fast as possible and stayed there for the duration of the project. All versions, renders, metadata, and other content were created in the same cloud storage location for immediate consumption by the production team, editorial team, and other post-production vendors.
All applications needed for the dailies and finishing phases of the show were co-located in the cloud with the media and metadata, which could be read directly without the need to make intermediate copies. These applications, including FilmLight’s Baselight and Colorfront’s Express Dailies, were remotely driven by editors, colorists, and other operators out of Light Iron offices in Los Angeles and Vancouver, which were fully outfitted with all the necessary monitors, control panels, and test signal devices required for a high-end finishing project.
Other technologies utilized in the workflow included AWS’s S3, EC2, and FSx, Colorfront’s Express Dailies (ExD) and Transkoder, FilmLight’s Baselight, Light Iron’s Galixy, and Streambox.
To monitor the video output, Light Iron used Colorfront Streaming Server, which streams compressed feeds directly to a remote location as a color-accurate, 10-bit 4:4:4 signal. This enabled the dailies colorist to validate that day’s work on a local, calibrated OLED monitor.
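For context on why the monitoring feed is compressed at all, a quick back-of-envelope calculation shows the raw bandwidth an uncompressed 10-bit 4:4:4 signal would demand. The UHD resolution and 24 fps figures below are illustrative assumptions, not production specs:

```python
def uncompressed_mbps(width, height, fps, bits_per_sample=10, channels=3):
    """Raw bandwidth of an uncompressed 4:4:4 video signal in megabits per second."""
    return width * height * channels * bits_per_sample * fps / 1e6

# An illustrative UHD monitoring feed at 24 fps:
rate = uncompressed_mbps(3840, 2160, 24)
print(round(rate))  # 5972 Mbps uncompressed, hence the need for compression
```

Nearly 6 Gbps per stream is well beyond typical internet links, which is why a perceptually lossless compressed stream is used for remote validation.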
The Benefits
The workflow focused primarily on enhancing efficiency and, ideally, reducing production costs, and as such it was a success. For instance, it was found to “significantly accelerate” the availability of dailies for creative and editorial teams, as OCF in the cloud was often available for processing on the same day it was shot.
“This kind of speed is becoming a crucial advantage in post-production pipelines, given the dynamic and fast-paced nature of contemporary productions,” Light Iron said.
It also meant the show’s editors could begin an early cut of a scene even before the production crew finished shooting at a given location, which in turn allowed work on visual effect shots to start much earlier than is typical.
“These incremental improvements, when implemented through the entire pipeline, had a marked effect on the length of shooting schedules and the number of hours spent by stakeholders ‘waiting around’.”
Additionally, having a single source of truth (the OCF bucket in the cloud) for each part of the process was said to eliminate errors in transfer and copy between various storage volumes (as those steps no longer exist).
“The time and cost savings from this alone was pronounced for a post-production facility, as additional storage volumes did not need to be purchased, additional I/O hours for manual copies did not need to be logged by an operator, and the need for round-the-clock troubleshooting support was greatly reduced.”
All combined, these benefits optimized resource utilization, accelerated project timelines, and reduced production costs.
It is a shiny case study for the MovieLabs 2030 Vision, too. “[Penelope] demonstrates the benefits of an all-cloud workflow, designed from the beginning to ingest both on-set proxies and OCF straight to the cloud. Collaborators anywhere in North America successfully used machines based in the US-West region with no latency impact,” they state.
“Processes that used to take hours, days or even weeks now can happen in real time by spinning up compute instances in the cloud with the media instead of waiting for media to be moved to the applications.”
However, as MovieLabs caveats, “although the workflow required caching for the object store to provide sufficient performance for color workflows, the seamless integration on top of AWS S3 minimized any chance of creating rogue copies of data.”
The Gaps In the Chain
This was not quite a full camera-to-cloud pipeline, and issues were identified that could be improved on.
For example, original camera files (OCF) were stored on hard drives and transported to the facility once or twice daily, as per normal. While lower resolution capture proxies were uploaded from set to Frame.io in real-time, the size of the raw media, and the bandwidth limits of the on-set satellite uplink, necessitated this more traditional path for the full resolution files.
In addition, even when OCF was uploaded to the cloud from the facility, real-time playback directly from the OCF stored in the cloud was not possible, “due to the reality of today’s cloud and bandwidth infrastructure.”
Instead it was necessary to use caching, a technique to create temporary local copies of files, to ensure adequate performance for color review. This process wasn’t seamless because caching only occurred once the file was first accessed.
“After creating a fully conformed timeline, the editor had to play it all the way through to ensure that all the media was properly copied before turning the timeline over to the colorist.”
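That warm-up step behaves like a classic read-through cache, where a file is copied locally only on first access. The sketch below is hypothetical; the class and method names are invented for illustration, not taken from the Penelope pipeline:

```python
class ReadThroughCache:
    """Fetches an object from slow cloud storage on first access,
    then serves the local copy on subsequent reads."""

    def __init__(self, fetch_from_cloud):
        self._fetch = fetch_from_cloud   # e.g., an S3 GET wrapped in a callable
        self._local = {}                 # stand-in for fast local storage

    def read(self, key):
        if key not in self._local:       # cache miss: first access triggers the copy
            self._local[key] = self._fetch(key)
        return self._local[key]

    def warmed(self, keys):
        """True once every frame in a timeline has been pulled at least once."""
        return all(k in self._local for k in keys)

cloud = {"frame_0001.ari": b"raw1", "frame_0002.ari": b"raw2"}
cache = ReadThroughCache(cloud.__getitem__)
timeline = list(cloud)

# The editor's full play-through is what guarantees every frame is cached:
for frame in timeline:
    cache.read(frame)
assert cache.warmed(timeline)
```

The drawback described in the article follows directly: nothing is cached until someone reads it, so the timeline must be played end to end before handover.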
Cloud costs were also “a major concern for the whole process and required careful management to make sure the cloud resources were used efficiently and effectively,” said MovieLabs.
“This workflow demanded a considerable amount of expensive FSx storage to accommodate the large, single file RAW formats used on the production. While storage costs continue to decline, Light Iron says it is providing special emphasis on smart resource scaling in future applications and automated pipelines.”
What Comes Next?
Future “color in the cloud” projects will look to third parties for enhancements in the underlying technology stack.
A number of third-party vendors are working to enable a more seamless if not real-time playback of shared file systems within AWS and other cloud providers. These are expected to offer similar performance and usability to traditional on-premise SAN systems, says Light Iron.
Another major improvement would be a vendor interface for interacting with virtual workstations in AWS. “Cloud-based production workflows would benefit from an abstraction layer that could serve as a user dashboard for interfacing with the virtual machines.”
Light Iron further expects to explore abstracting cloud workflows “away from heavy reliance on one cloud vendor,” saying, “there are clear potential benefits to picking and choosing services from different cloud vendors based on cost and functionality.”
Light Iron believes that taking a multi-cloud approach will allow better service for studios and production companies who have chosen to contract with other providers for their storage or compute needs.
“With high-speed networking getting less expensive every day, there are fewer roadblocks to spreading an uncompressed finishing workflow across multiple providers.”
The next challenge to tackle is performing creative finishing VFX work in the cloud. This also requires low latency and high color accuracy, but VFX work demands even higher-performance systems with significant rendering capabilities and storage speed.
VFX supes Jabbar Raisani and Marion Spates and Company 3 colorist Siggy Ferstl to discuss “Avatar: The Last Airbender” at NAB Show 2024.
Posted April 10, 2024
The Balancing Act of Cloud and On-Premise Storage
TL;DR
Cloud storage solutions are increasingly integral to modern media asset management systems, whether it’s the entire solution or working in conjunction with on-premises storage pools.
Broadcasters are finding an optimal balance between on-premise and cloud storage by leveraging the strengths of each to suit their specific workflow needs, enhancing both efficiency and collaboration.
Scalable and cost-effective storage strategies are increasingly relying on hybrid models, employing both cloud and on-premise solutions to meet growing content demands without compromising on performance or cost.
Hybrid storage setups, combining on-premises and cloud solutions, are becoming more scalable and future-proof, according to a cross-section of media asset management solutions vendors reporting to NewscastStudio’s Industry Insights series. Costs and migration between tiers still need to be carefully managed.
The integration of cloud storage solutions into MAM systems facilitates remote and collaborative workflows and ensures scalability and cost-effectiveness in media production.
Here are some key takeaways from the Industry Insights Roundtable:
Jonathan Morgan, SVP for product and technology at Perifery, says, “While the M&E industry remains full of companies building islands of storage, on-prem or in the cloud, interconnectivity between storage is now more available and more powerful than ever. Storage hasn’t been about hardware for a long time — it’s more about the power of software.”
David Rosen, VP of cloud applications and services at Sony, says, “The integration of cloud storage into MAM systems is critical and part of a broader trend towards cloud-based workflows in the media and entertainment industry, offering flexibility, scalability, and efficiency in managing and distributing media assets.”
The demand for high-resolution content is intensifying storage requirements, with proxy-based workflows and scalable cloud solutions playing a crucial role in managing capacity and bandwidth.
Aaron Kroger, product marketing manager for media workflows at Dalet, says, “The escalating demand for high-resolution content, alongside considerations like bit depth, data rates and HDR, is intensifying storage requirements around capacity, bandwidth, access and collaboration. To accommodate this surge, scalable solutions like cloud storage are essential, yet the hybrid approach strikes a balance, addressing concerns regarding egress and efficiency for optimal workflows.”
With 16K on the horizon, Pixitmedia CTO Barry Evans says there is no end in sight for enhanced media quality — especially with AR and VR. “What’s become clear is that the standard practice of buying more storage boxes (and hiring more people) to accommodate increasing file sizes is too expensive and inefficient to truly scale and smart companies are building storage infrastructure that will scale with them. That means creative uses of data management and optimization of storage tiers to manage storage TCO and keep creative teams on task.”
Media companies must balance on-premise and cloud storage for optimal efficiency, says EditShare CTO Stephen Tallamy. “Every application is different, and every user will find a different balance between ground and the cloud. If production and post are in-house, then there is little point in using the cloud (except for archiving) because it just adds cost and complications. If you are moving to remote production, then collaborative workflows are enhanced through the cloud, including new ways of working like remote editing in place.”
In 2020, the shift to the cloud was a reaction to a dramatic change in workplace structure. Now, according to Duncan Beattie, product manager for storage at Rohde & Schwarz, companies understand the importance of on-premises storage being responsible for the highest bandwidth demands, most sensitive data, and critical work.
“Modern hybrid storage workflows, which incorporate Edge and Cloud solutions, are very powerful when designed well and should align with the company’s demands wherever production takes place.”
Tom Pflaum, VP of product management at Telestream, explains that it’s most efficient to have media reside where the workflow is, meaning that not every organization employs the same mix of cloud and on-prem solutions.
“Some still rely predominantly on on-prem workflows, others have migrated significantly toward full cloud adoption, and the vast majority sit somewhere in the middle and are taking a hybrid approach.”
Going forward, though, leveraging cloud storage solutions will bring tremendously beneficial cost, resource, and workflow efficiency improvements “that can make-or-break a broadcaster’s bottom line,” Pflaum says.
Vizrt says its customers prefer a tailored solution with multiple on-premises and cloud object storage options in a variety of locations. “This allows content to be held, migrated, and stored most cost-effectively, while ensuring continuity of access and an ability to onboard new storage solutions when required,” shares Jochen Bergdolt, global head of MAM. “Automated rule sets can be applied to determine where content should be held, intelligently balancing the need for access whilst keeping costs under control.”
There are cost implications, though — both of staying wedded to on-prem and of moving to cloud. “For new broadcast players, the costs of securing and maintaining on-premise storage mean an exclusively cloud-based model is likely their best bet,” says Bitcentral COO Sam Peterson. “In the case of established players with an on-prem setup, the optimal balance may be a gradual transition as they find the right stepping stones to move to the cloud, such as a hybrid solution.”
Abe Abt, senior product consultant at AJA Video Systems, says, “Cloud storage has given broadcasters more scalability, but it comes at a high price. Facilities often use it to supplement their on-prem storage to keep costs manageable. To this end, leveraging a combination of cloud and on-prem storage with the addition of a management layer that allows users to access or at least view data on both has become a popular storage ontology.”
Unfortunately, there is no simple solution for the migration. “For the most part organizations are looking at ‘tiered’ storage models,” says Philip Grossman, VP of solutions architecture at DigitalGlue.
This is where the high-performance, lower-density storage is utilized for editing content, with lower-performance, high-density near line storage being used for longer term retention, and then, finally, either cloud or on-site LTO (tape-based) storage for long-term or archive storage.
“Due to the cost constraints associated with storing an entire archive in the cloud, as well as paying to store hundreds of thousands of hours on physical hardware on-prem, broadcasters tend to use mezzanine file formats,” says James Fraser, VP of US sales at Moments Lab.
“This means that instead of uploading large, multi-terabyte files to the cloud, they use compression technology to create a smaller version of the file in a usable format of 18 Mbps or more. Storing a mezzanine file in the cloud is a lower-cost solution than duplicating storage of the original file, and it makes the content accessible to users worldwide, any time they need it.”
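The arithmetic behind that saving is straightforward. A rough sketch, using the 18 Mbps figure from the quote and an illustrative 2 Gbps raw camera rate (the raw rate and one-hour duration are assumptions for the sake of the comparison):

```python
def size_gb(bitrate_mbps, hours):
    """Storage footprint of a constant-bitrate stream: megabits -> gigabytes."""
    return bitrate_mbps * hours * 3600 / 8 / 1000

raw = size_gb(2000, 1)      # illustrative ~2 Gbps camera raw, one hour
mezzanine = size_gb(18, 1)  # 18 Mbps mezzanine, per the quote above
print(round(raw), round(mezzanine, 1))  # 900 GB vs 8.1 GB
```

At these illustrative rates, the mezzanine copy is roughly one percent of the original's footprint, which is what makes cloud-hosted proxies economical.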
That’s why an efficient data management system is the winning strategy here — according to MAM vendors.
“Maintaining assets on Tier 1 storage is convenient, but wildly expensive,” says Evans. “Having a data management system that can automatically move data to more cost-effective tiers when it’s not in use and can call it back quickly when needed is how companies will get the most out of their storage infrastructure without breaking the bank.”
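An automated tiering policy like the one Evans describes can be as simple as a rule on time since last access. This is a hypothetical sketch; the tier names and thresholds are invented for illustration:

```python
from datetime import datetime, timedelta

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Hypothetical policy: demote assets the longer they sit untouched."""
    idle = now - last_access
    if idle < timedelta(days=30):
        return "tier1-nvme"   # active editing: fast, expensive storage
    if idle < timedelta(days=365):
        return "nearline"     # retention: slower, denser storage
    return "archive"          # cloud cold storage or LTO

now = datetime(2024, 4, 10)
assert assign_tier(datetime(2024, 4, 1), now) == "tier1-nvme"
assert assign_tier(datetime(2023, 11, 1), now) == "nearline"
assert assign_tier(datetime(2021, 1, 1), now) == "archive"
```

Real data management systems add recall logic in the other direction, moving an archived asset back to fast storage when an editor requests it.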
Caleb Ward: The Possibility and Probability of AI Filmmaking, Part 1
TL;DR
Curious Refuge CEO Caleb Ward is a strong believer in generative AI’s potential to democratize creativity. He and his wife/COO Shelby founded the platform to offer education and community to AI storytellers and aspiring AI storytellers.
Read the first part of Ward’s interview online here; check out his thoughts on genAI controversies in part two; or watch the full conversation, below.
NAB Show “has had a very important role in my career,” says Curious Refuge CEO and cofounder Caleb Ward.
Ward says he always looks forward to walking the show floor and “having conversations with people that work for all of these incredible brands” because the folks in “the booths are really knowledgeable, and they actually really want to help you.
“So I have countless conversations that I’ve started up with people at the NAB Show, and that’s echoed into either business opportunities or just creative questions that get answered.”
At this year’s NAB Show, Ward says, “I want to have conversations with a lot of these brands about how they’re thinking about AI, and how that integrates into everything from the types of features that they’re implementing in cameras and broadcast equipment all the way to, of course, the software manufacturers.”
He adds, “But I would be lying if I didn’t say that another big part of NAB that I’m really excited about is just the side split off parties that happen. So funny to just like, you know, be in a place where everybody understands just that nerdy niche that you live in.”
This year, Curious Refuge will host a party for the AI filmmaking community at the HyperX Arena on April 15. NAB Amplify readers can purchase their tickets at 50% off using code NAB50.
At NAB Show, Ward says, “It feels like your guard is down. And you’re like talking in, like, this language that you don’t normally get to talk with other people.”
“I can’t imagine not trying to make it out every year because it really is an essential conference for us,” he explains. “If you guys run into us at the show floor, please say ‘hello.’”
“It’s very important for people to be having conversations about the impacts of AI tools” on the industries and workers, says Caleb Ward.
Posted April 10, 2024
Caleb Ward: The Possibility and Probability of AI Filmmaking, Part 2
TL;DR
Curious Refuge CEO Caleb Ward is a strong believer in generative AI’s potential to democratize creativity. He and his wife/COO Shelby founded the platform to offer education and community to AI storytellers and aspiring AI storytellers.
Read the first part of Ward’s interview online here, or watch the full conversation, below.
In addition to creating educational curricula for AI users, Curious Refuge offers a weekly newsletter and web series, hosted by CEO Caleb Ward, to help followers keep up with the latest and most important developments in this space. It’s interesting to hear his thoughts on some of the biggest conversations about generative AI.
First, it should surprise no one that Ward isn’t overly worried about AI taking over Hollywood.
“I don’t think it’s a zero sum game,” Ward says. “I think that we’re going to see completely new AI assisted stories, and I think we’re going to have the Hollywood stories be elevated in quality because of the use of artificial intelligence.”
However, at Curious Refuge, “We believe that that transition time can be, you know, challenging,” Ward says, even while people embrace new tools and opportunities.
It’s true that “storytellers and creative people now have creative potential and the ability to tell more stories, bigger stories, and really, to take the power away from just studios commissioning you.”
For Ward, “It’s been really interesting to think about how these tools can transform each aspect of the pipeline.”
Ethics and Industry Impacts
But he acknowledges that “it’s very important for people to be having conversations about the impacts of AI tools” on the industries and workers they’re aimed at.
In addition to recognizing the law of unintended consequences, Ward says, “It is very important to use these tools in an ethical way.”
He says, “We have a digital language in the ethics of using tools like Photoshop right now. And that’s because we’ve had a lot of experience to know the pros and the cons and to know what as a society and what as legislation and laws have popped up, as it relates to using those tools. We’ll see the same thing pop up with AI.”
While the dust may not have settled yet, Ward says, “We think you need to start with ethics before you begin creating. And you need to find out where you personally live as it relates to using these tools. Some people — they won’t even use tools like ChatGPT because of the ethical concerns that they have. And I totally understand their perspective on that. But I think it’s really important for everyone to arrive at what that means for them.”
Ward knows that his lines won’t be the same others draw. But he isn’t starting from scratch when drafting the Curious Refuge framework. “We use some visual, I guess, ethical cues from society, some cues that we see in other places, as kind of a litmus test for how we approach our AI tools.”
For example, they’ve released some parody trailers and looked to SNL for guidance as to how to ethically play in this space of representing actors and IP.
And on the flip side, Ward says, “As it relates to my work being ingested by an AI and someone being able to, you know, spit out [AI-generated] work that seems like my work, I’m fully expecting that to happen. And it did happen” when the Wes Anderson-esque video went viral.
His feeling was “Oh, that’s cool. I got to inspire somebody to create on their own.”
After all, “It’s always a back and forth with AI… You have to treat it like a creative assistant. It’s not a creative replacement. It doesn’t have taste,” Ward notes.
However, Ward says, “I know that there are a lot of IP holders in Hollywood that don’t share that same vision of a future in which storytelling is collaborative, and not just owned by certain entities. But I think that for me personally…that’s how I’m viewing AI for myself.”
Setting up the Guardrails
In addition to personal responsibility, Ward is adamant that AI toolmakers need to do their best to get it right and to understand how their actions are likely to impact the marketplace.
“I think really, the responsibility needs to go to these companies,” Ward says. “And they need to be mandated to do their ethical research around how they release these tools.”
He believes this should be formalized, as well. “These tool companies need to have ethics teams that are thinking about the bad ways in which people can use their tools and the negative impacts that it could have on society. But even then, they’re not going to get it right.
“Google released their image generator and totally botched it. Google has a huge ethics team, they really think about their steps before they take them. If they can’t get it right, I don’t really see a scenario where the companies are going to get it right before they release.”
But that doesn’t excuse them from making an effort. And some companies certainly are.
Ward points out that OpenAI likely has a Sora-equivalent AI video generator waiting in the wings. He explains, “They probably have the ability to release that right now. But they’re intentionally not doing that because they want to think about the long term impacts and how they can roll it out in a way that supports people.”
For those keeping an eye on AI regulations, Ward says, “I do not think that it is even reasonable to think that legislators would have any idea as to what is happening. I think they need to be talking with creators and with the people developing these tools, and they need to be informed in the process. I do like that we have some legislation and mandates that are requiring pitfalls and red flags to be talked about and addressed before tools are released. I think that’s very, very important.”
Nonetheless, he says, “I think there are some proactive things you can do, like the EU’s recent laws related to AI. But the government is really in a Catch-22 because if you start pumping the brakes on the development of AI tools, other countries that are less inclined to slow down the development are going to take over, and we know that a week in AI is like a year in previous industries. And so the economic cost that it could play out to your society could be just super detrimental.”
Recalling the infamous original red flag legislation, Ward says, “You don’t want to end up like the car manufacturing industry in the UK. But you also don’t want AI tools to just come through and decimate existing industries and change the way that we do things.”
How Hollywood Is Handling All This
Utilizing generative AI, Ward says, involves “a restructuring of the way in which you understand the process of storytelling, but ultimately, I think it’s going to result in bigger stories, more stories, more diverse stories, and I think that is a good thing. But there are going to be growing pains.”
Specifically for the Hollywood studios, the obstacles are primarily “legal challenges and copyright challenges. So if a studio creates a film, using artificial intelligence, most studios are under the impression with the legal precedent that’s been set up to this point that they don’t actually own that film.”
However, Ward says, “I really don’t think the precedent is really going to continue for you to not truly own content that’s created with the assistance of artificial intelligence.” He notes that ruling was related to the creation of a wholly AI-generated image library, rather than utilizing AI tools and the back-and-forth creative prompting process.
Currently, Ward points out, all of the big studios have R&D teams dedicated to learning how to use AI, but the majority of the creative teams are banned from using it, with the exception of the studios that have created custom models. (“They’re not great at this point,” Ward says.)
He thinks, “The industry is going to have to answer those questions, and there’s gonna have to be legal precedents that make them feel more comfortable — or studios, just probably a smaller studio is going to be brave and release their own thing, and it’s going to be a big commercial success. And the big studios are going to be like, ‘OK, here we go!’”
In the interim, Ward says, “Studios in the larger industry do themselves a disservice by not exploring these tools.”
And once studios decide to dive in, new challenges will have to be tackled.
As we all know, “Workflows with AI, they change very frequently. And the quality of the AI generations is really improving day after day,” Ward says. “So how do you build a pipeline whenever you have these innovations popping up like this? And the answer, usually, in Hollywood, is you lock in your tech.”
But will that be feasible or even practical with genAI tools? Unclear.
“I think there’s huge opportunities for those that embrace it,” Ward says. “But if you’re sitting back and you’re just too scared to touch it, or you know that that paradigm shift causes paralysis, then I would say, really think about these tools and just go into it with a little bit of an open mind to seek out what creative opportunities there might be there for you.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
Posted April 10, 2024
The Method to the Madness in Guy Ritchie’s “The Gentlemen”
TL;DR
DP Callan Green talks to NAB Amplify about lensing “The Gentlemen,” Guy Ritchie’s action-comedy crime series for Netflix.
A boxing scene presented a major lighting challenge, requiring a 360-degree rig and precise control as the actors moved around the ring.
Green recounts how he got his start climbing the cinematography ladder by cleaning Peter Jackson’s glasses on “Lord of the Rings.”
Guy Ritchie’s signature crime caper genre wouldn’t be complete without some boxing, and scenes in Episode 6 of his Netflix series The Gentlemen presented one of the biggest challenges for the camera team on the show.
The location was a venue called The Magazine, which overlooks Canary Wharf in London. “It looks awesome but it is also just an event space with massive windows that we had to turn into a big boxing arena using a 360° lighting rig,” says DP Callan Green, ACS, NZCS, who shot this and three other episodes.
The New Zealand-born filmmaker is now established as a main unit DP, having begun his career as a clapper loader and assistant camera on Peter Jackson’s Lord of the Rings trilogy.
“I like to backlight or sidelight boxing scenes as much as possible but what made it tricky here was that we wanted to get our fight camera operator right in amongst the action on a 21mm wide lens,” he tells NAB Amplify.
“With the actors dancing around inside the ring it seemed almost impossible to keep the lighting looking good and consistent unless there was some way of rotating the percentage of the values of the LED backlight as the boxers move around in relation to where the camera is.”
The idea struck Green just a day before the lights were to be rigged, but gaffer Jack Powell and desk operator Charlie Stallard weren’t fazed.
They pixel mapped the moving lights around the ring and established the best lighting levels for the fighters, then created a greyscale blend in Photoshop to overlay the pixels.
“This Photoshop image would then rotate over the pixel map itself, which created the smoothest possible dimming whilst rotating the light levels during the fight,” says Green.
There were nearly 2,500 instances and pixels within the lamps reacting to the change in position of the fighters. “This gave us the ability to continuously rotate as the fighters did for several rotations.”
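The gaffing trick described above, a greyscale gradient rotated over a pixel map so that dim levels follow the action, can be sketched in code. This is a hypothetical illustration, not the production’s actual console programming: each fixture arranged around the ring gets a level from a smooth falloff curve centered on a moving backlight angle.

```python
import math

def fixture_levels(num_fixtures, backlight_angle_deg, falloff_deg=90.0):
    """Return a dim level (0.0-1.0) for each fixture arranged evenly
    around the ring. Fixtures nearest the backlight angle are brightest;
    levels fall off smoothly, like a rotating greyscale gradient."""
    levels = []
    for i in range(num_fixtures):
        fixture_angle = 360.0 * i / num_fixtures
        # Smallest angular distance between fixture and the backlight direction
        delta = abs((fixture_angle - backlight_angle_deg + 180.0) % 360.0 - 180.0)
        if delta >= falloff_deg:
            levels.append(0.0)
        else:
            # Cosine ramp gives smooth dimming as the gradient rotates
            levels.append(0.5 * (1.0 + math.cos(math.pi * delta / falloff_deg)))
    return levels

# As the fighters circle, rotate the backlight angle to keep them backlit
# relative to the camera, rather than re-cueing individual lamps.
for frame_angle in (0.0, 90.0, 180.0):
    print([round(v, 2) for v in fixture_levels(8, frame_angle)])
```

Rotating `backlight_angle_deg` frame by frame reproduces the effect of spinning the greyscale overlay: levels ramp up and down smoothly instead of snapping between cues.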
Virtual masks were added within the levels around the ring to counteract any camera shadow. The opposing sides of the venue were lit symmetrically for aesthetics, and all the lights had full-color control.
Around 500 lights were rigged in total, including a truss rig directly above fighters that was the same size as the boxing ring and had three separate fixtures totaling 140 lamps.
“Impression X4 Bars were used to give a general ring backlight and ambience as the boxers moved around,” Powell explains. “Robe MegaPointes were deployed for the cage feel for the ring entrance and Fusion X-Par 12 to push lights out to the crowd during the fight.”
The trusses above the crowd had a mixture of P5 wash lights to augment the ambience. Ayrton Perseo wash lights enabled background interactive lighting.
“For the ring walk we went with Chauvet Strike 4 for the background and Impression X4 on the floor, then we programmed different effects and colors to suit individual ring walks,” Powell adds.
A set of Robe Spiiders backlit the boxers in the ring from around the stage edges. They created a program to track the backlight and kill the frontlight as the fighters moved around the ring.
“We also added MegaPointes on the perimeter to allow us to flare the lens on the ring walks whenever it suited,” Green says.
“That was our biggest set to manage lighting wise as we only had a short window to rig the location. We had four days total in and out, using two rigging crews over two days, and then a pre-light day with 20 sparks and 12 riggers. Plus a derig. This was a military operation led by Farrow and installed in 24 hours.”
Inspired by the director’s 2019 movie of the same name, featuring a new cast of characters, The Gentlemen is set in the same heightened and often hilarious world of aristocrats and gangsters; one with the breeding and the birthright, the other with the brawn and the belligerence.
Although the production mostly shot against real-life backdrops, it also used Alperton Studios to create some interiors — including a council flat in Croydon, South London where Eddie (Theo James) gets physical with a goon.
Lead DP Ed Wild had created a four-page PDF detailing the show’s look and feel. Green also got to watch early cuts of the first two episodes. “I was pretty scared, watching those, since the bar was set high,” he acknowledges.
He watched Ritchie’s original film as well as Snatch and Lock Stock and Two Smoking Barrels, pulling out a few shots to “nod to” in the series. Sexy Beast, Rocky, Creed and Amelie were other cinematic reference points.
“We were given quite a lot of scope to do what we wanted as long as we didn’t get too crazy,” he says. “I collected as many stills for reference as I could find that I felt resonated with the color and tone of what we were about to do.”
The series is shot at 6K using a Sony VENICE camera equipped with Tokina Vista Primes, typically with a quarter-black satin filter, “which takes the edge off [the resolution sharpness] a little bit and gives the highlights a bit of halation.”
Sony FX3s were also used to cut into the A camera and were placed on props like guns, whisky bottles, pigeon cages, and a traveler’s caravan.
All episodes worked from one show LUT, which was versatile enough to work for night exteriors and day interiors. “When I first started working with it, it freaked me out a little because it was quite heavy and deep in the blacks and darker areas. That paid off in the long run because you had so much more information in post.”
Following his experience shooting two episodes for director Eran Creevy, Green jumped at the chance to continue shooting Episodes 7 & 8 for director David Caffrey.
“Having just come from Masters of the Air and gone on to work on Gangs of London Season 3, I feel very lucky to have had three awesome jobs in a row,” he says.
Green grew up in a suburb outside of Wellington, NZ, and began taking stills when his mom bought him a camera. When his brother got into acting, his interest in filmmaking was sparked.
In 1993 he helped shoot a commercial for a peanut butter brand “voted the year’s worst ad in New Zealand,” he smiles, but on set encountered an ARRI film camera for the first time. Asking the key grip how he could break into the industry the advice that came back was “A lot of hard work, mate.”
Green studied photography at high school and following graduation got a job as a video split operator (now known as a VTR op). Shortly afterward he found himself part of the rapidly growing local filmmaking scene jumpstarted by Weta and LOTR.
“Peter Jackson used to get me to clean his glasses for him. He was really lovely to me. He’s one of many people I’ve met along the way who took me under their wing.”
He won a place at Sydney’s prestigious national film school, leaving in 2003 with a master’s in arts and cinema, and never looked back. Based in London since 2015, Green’s work has included second unit work on Christopher Robin, The Witcher, F9 and Fast X, as well as on all nine episodes of Masters of the Air. He also recently served as DP on four episodes of the latest season of BBC crime drama Guilt.
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
Posted April 10, 2024
Shadow and Light: Cinematographer Robert Elswit’s New Noir for “Ripley”
All eight episodes of Netflix’s limited series, adapted from Patricia Highsmith’s novel The Talented Mr. Ripley, are directed by writer and creator Steven Zaillian, and they’re all lensed by cinematographer Robert Elswit.
“This singular vision gives Ripley both an impressive aesthetic cohesion and a radical kind of ambition,” says Vanity Fair’s David Canfield, who has the filmmakers dissect a dozen shots from the psychological thriller.
“When we had a chance, we tried to create a kind of chiaroscuro, a feeling of very strong shadows and very strong highlights,” Elswit explains. “I kept thinking while we were shooting, ‘I’ll fix all this later.’ And we didn’t fix any of it.”
Ripley leans into the noir of the original story in a more literal way than the 1999 feature adaptation starring Matt Damon. “What follows is a dizzying saga of lust, murder, impersonation, and deception, all captured in radiant black-and-white,” says Canfield.
The story is set in Italy, including in Naples and the Amalfi Coast, and the locations were a key part of the look of the 1999 movie.
“I knew from the beginning that I wanted to have this high contrast film-noir style,” Zaillian says. “We didn’t want to do anything that was familiar to us… I didn’t want to make a pretty travelogue.”
Elswit says that lead actor Andrew Scott has such an expressive face, “it dominates the series in a way. In all the different lighting setups where we did medium close-ups and tight close-ups, it was always fun to find an interesting way of creating contrast and shadows on his face.”
For lighting co-star Dakota Fanning, photographs of Grace Kelly served as inspiration. “It’s this hot, bright contrast between light and dark,” Elswit explains of one shot of her in a police station.
A lot of the show’s action takes place in an elevator. “It’s a symbol of dread for Tom Ripley when people come up in this elevator,” Zaillian says. “It’s a very important location for us. We shot it, basically, every way you could: from inside, from outside, from low, from high. But I had something very specific in my mind. We reached a point where I started to see ways of shooting this location that could be really fascinating, with this open staircase.”
Other classic noir lighting shots included looming giant shadows cast onto a wall, recalling Orson Welles’ entrance in The Third Man.
They used half lighting to evoke the idea that Ripley is two people almost all the time. In other shots Ripley is in total silhouette: “You know exactly what he’s thinking without seeing his eyes,” Zaillian says.
They also used the texture of buildings and cobblestones in ways that have been done since the 1920s. Zaillian added, “It doesn’t look nearly as interesting, by the way, in color, it just doesn’t.”
April 9, 2024
AR, VR and MR Storytelling: It’s a Whole Different Visual Language
TL;DR
Spatial storytelling has yet to take off, but advances in hardware and trailblazing storytellers like Emmy Award-winning immersive director Michaela Ternasky Holland are changing that.
“Nobody has it figured out,” Ternasky Holland says, noting that the industry is still in its early days and that people are still making mistakes as well as good work.
She says the “beauty” of XR storytelling is that it can approximate the embodiment of day-to-day life in ways that traditional mediums are unable to replicate.
The rise of virtual reality storytelling is inevitable because human beings are innately tuned to spatial, 360-degree, three-dimensional experiences. Slowly but surely, the hardware and the development ecosystem are catching up with the storytelling possibilities already being explored by trailblazers like Emmy Award-winning immersive director Michaela Ternasky Holland.
“Nobody has it figured out,” she says. “It’s early days for the medium and people make mistakes, as well as good work. But because the hardware keeps improving, we’re going to start to see a rise in spatial storytelling.”
She adds, “We all know what it’s like to move in space from the moment we are born. The beauty of XR storytelling is that it can approximate the embodiment of what you feel in your day-to-day life in ways that you would never be able to replicate with a traditional medium.”
Ternasky Holland and fellow directors, producers, and creators at the NAB Show session, “Creative Lens on Compelling Content: Artistic and Commercially Successful AR, VR, and Mixed Reality,” have been working with emerging technology since the early days. They have already shattered boundaries to deliver impactful, immersive, and commercially successful experiences in AR, VR, and mixed reality (collectively categorized as XR), and will share the secrets of crafting world-class experiences that captivate audiences while generating profit.
“A lot of people would love to do this type of work but perhaps feel that it doesn’t have a distribution platform like traditional film/TV, or that there’s not really a clear publishing platform like traditional journalism,” says Ternasky Holland. “It’s true that there is not yet a robust sales pipeline, but that doesn’t stop great work from being made.”
Allan-Blitz was the first VR director for Time magazine, and has since partnered with Van Jones to create The Messy Truth VR Experience, starring Marvel actors, and also directed the first AR short film for Disney+ called Remembering.
Named “The Godmother of Virtual Reality” by Engadget, The Guardian and others, De La Peña is now working as the founding director of Arizona State University’s Center for Emerging Media and Narrative. As founder and CEO of Emblematic Group, she uses cutting-edge technologies to tell stories — both fictional and news-based — that create intense, empathic engagement on the part of viewers via immersive virtual, mixed and augmented reality.
She has been on the cover of the Wall Street Journal magazine as a WSJ “Technology Innovator of the Year” and Fast Company named her “One of the People Who Made the World More Creative” for her pioneering work in immersive journalism, a field she is widely credited with establishing. She is also one of CNET en Español’s 20 most influential Latinos in tech, and a Wired Magazine #MakeTechHuman Agent of Change. A former correspondent for Newsweek, she has more than 20 years of award-winning experience in print, film and TV, and her virtual-reality work has been featured by the BBC, Mashable, Vice and Wired.
Krueger is head of production for the Metaverse Entertainment Team at Meta, and is constantly working on how to translate big commercial IP into the VR space, with projects ranging from Wallace & Gromit to Darth Vader.
Ternasky Holland herself is an Emmy winner for her work creating the first VR climb of Everest for Sports Illustrated. She has partnered with Meta to create multiple VR projects, examples of which she will present in the session. These include the reimagined series made in collaboration with writer-director Julie Cavaliere, which revisits lesser-known folk tales and animates them for VR.
“We don’t think of things the same way as a filmmaker or a journalist does. We almost think like a choreographer,” she says.
Questions around camera placement and location as a character will sound familiar to a filmmaker, but in the context of 360 video the answers will be very different.
“What is the activity around the camera?” poses Ternasky Holland. “We’re not just thinking about creating amazing environments, we’re also thinking about potential interactivity and how the camera moves through 3D 360 space. We’re thinking about how people are going to be depicted and whether they’re going to exhibit the ‘uncanny valley’ as avatars or whether we’re going to capture them in real time with volumetric 3D video,” she says.
“What we’re really trying to do is define a new media of creativity and a new visual language for storytelling.”
During that production, they found a stronger reaction among viewers to the 3D immersive version of the film than to a similar 2D animated version.
“VR is not necessarily an empathy machine, it just lends an immersion quality that heightens emotional responses. In both 2D and immersive 3D cases the participants felt connected to the story and connected to the characters, but just on that level of emotional impact we saw the difference VR can make.”
One goal of the panel is to act as a rallying cry for others to come and explore XR. “You don’t have to turn your back on traditional mediums in order to be a part of this industry. Nor do you need to have a technical background. From VR animation platforms to game engines, there are so many products helping to make XR experiences accessible to people without an engineering or coding development background,” she says.
“We want to see an industry that has true diversity and inclusion. We want artists, creatives, and storytellers and we want producers, we want good lawyers to build up an ecosystem for XR experiences.”
She describes XR as a sandbox environment where serendipitous events should be allowed to happen. “Traditional filmmakers come from a background where they control and edit every single frame and as a result they are constantly in control of the viewer. There is no denying there is tremendous power in that but if you want to get involved in this more immersive, interactive storytelling landscape then you have to let go of that control.”
She adds: “This medium is more like immersive theater where you create the rules of the world but you have to recognize that your audience has agency and will make decisions based on what interests them.
“That is both the secret sauce and the powerful part of this process. If traditional mediums are a passive experience, XR allows people to explore the embodiment of being inside the story.”
The latest advancements in VR/AR headsets such as the Apple Vision Pro are unlocking a new era of immersive experiences, but Ternasky Holland is cautious about jumping too soon.
“I always hesitate to say the word ‘explosion’ because it seems we’ve been on the cusp of that for the last 10 years. Every new headset that comes out makes a slight improvement whether that’s in the weight distribution of the headset or in the pixel count of the display. We now have external cameras streaming live video into your vision for better mixed reality and with the latest headsets we can take advantage of eye-tracking technology,” she says.
“I do think though that one day we will all have some sort of mixed reality headset, whether that’s for work, similar to the way we all have laptops, or whether it’s an entertainment device,” she continues.
“I don’t think we’re going to get there any time soon, but that’s just fine because I’d prefer us to slowly build and grow the hardware and the content in parallel to be able to manage expectations. For me it’s a little less about an overnight change and more of a gradual transition.”
April 9, 2024
MrBeast YouTube President Marc Hustvedt Predicts the Next Phase of the Creator Economy
A giant of content creation meets broadcast and cable TV head-on at NAB Show, where the president of the largest YouTube channel in America will enjoy challenging the audience.
At 137 million subscribers and counting, MrBeast is not only the largest YouTube channel in America, but the fourth biggest in the world. It is spearheaded by nearly 26-year-old creator Jimmy Donaldson, whose content revolves around extreme and expensive stunts, reaping several billion views on social channels.
MrBeast YouTube is already a huge media business, expanding internationally with dubbing into more than 12 languages, including Mandarin, and side ventures like Feastables and MrBeast Burger, which contribute an increasing slice of revenue.
Hear the Top 5 Trends in the Creator Economy
At NAB Show, Hustvedt will explore the current state of the creator economy, the content strategy behind creating viral sensations, landing lucrative sponsorships, and more. In conversation with Jim Louderback, editor and publisher of Inside the Creator Economy, Hustvedt will identify the Top Five Trends in the Creator Economy.
What, for instance, are the key things that MrBeast optimizes in order to keep millions of viewers coming back for more?
“You are trying to continually optimize to keep the audience stimulated and excited,” Hustvedt said in a YouTube chat with CreatorIQ. “Particularly with our product, we don’t put in a lot of what people from traditional entertainment think is valuable. Like the little simplest thing: you’ll rarely see an establishing shot from us. If we’re going into a Walmart or something, you don’t need to show the outside of a Walmart and then go into it, but television was sort of trained in that way.”
He added, “In terms of value, obviously you have to deliver on the promise of what the viewer expected. We sort of train internally to think about where in the video that is going to go.
“Let’s say there’s this giant spectacle with an airplane shooting off fireworks, and it’s going to cost a lot of money. It actually really matters where that is in the video, because if we’re looking at where we need to spend, we’re typically going to bias towards the front side of the video. There are little things about the literal time of the video and the progression of the video that are super important to think about.”
MrBeast is equally smart at integrating brand partnerships into stories. “They’re not interruptive ads — it’s so weaved into the story,” he said.
That’s in contrast to some content creators who simply read out a sponsorship message so it actually feels like an ad break. “The viewer is conditioned to hit the skip button — it’s literally built into every interface,” Hustvedt said. “Knowing that, how do we make it so that these are not skippable because you’re so invested in the story that’s going on that you just need to see what’s going on in the background?”
While MrBeast grew the business on YouTube, the creator team has made a concerted effort to grow the brand’s presence on TikTok.
“You can ignore it, or you can lean into it,” said Hustvedt. “On one level, it’s like an existential threat to our long-form YouTube business. If our long-form YouTube business has a couple of million-dollar ad integrations in the middle, each video is a multimillion-dollar piece of business, but it’s on a 12- to 18-minute video. [TikTok is] this ultra short-form content.”
There are different types of creators, Hustvedt says, explaining to Fast Company how some excel at retention-based editing, while others are really good at marketing themselves and building hype. “And some are just incredibly good at oozing authenticity and being themselves.”
Before MrBeast
Hustvedt’s own resume prior to MrBeast is impressive. The digital entertainment expert and veteran entrepreneur has founded several successful ventures, including Tubefilter, Supergravity Pictures, and the Streamy Awards, and has served as CEO at Above Average and React Media.
“If you can make consistent content that an audience wants to see, the tools are there for you to do the rest of the business stuff. And then you can hire some idiots like me to help you out.”
Hustvedt is underselling the professionalism required in making content that aims to reach more than a billion people every month. MrBeast even has an HR department to professionalize recruiting.
“When we hire people, you can hire somebody who has helped develop shows and talent or worked with the major brands on a big campaign,” he said. “We put in an applicant tracking system. We now have a core set of values that we think make a strong and successful employee.
“That’s why we announced the partnership with East Carolina University. We needed a better pipeline of talent because this stuff is not trained in schools. We’re developing a curriculum with ECU, and hopefully we will bring a lot more people into the industry.”
From a personal career standpoint, Hustvedt says he doesn’t want to work in an industry “where everybody knows everything, and it’s just super traditional.”
He added, “If it feels like network TV at a certain point, and you’re just defending the old way of doing it, then I’m probably in the wrong place. We’re going to keep playing around.”
Posted April 9, 2024
How AI Will Advance Personalized Video Content
TL;DR
Viaccess Orca CTO Alain Nochimowski shares his perspective on how generative AI is changing video personalization and offers advice for working with GenAI.
He is bullish on generative AI, but does not shy away from discussing some of the growing pains and challenges of leveraging the technology. He encourages everyone to get hands-on with GenAI tools and learn how to utilize ones that are specific to their needs.
NAB Senior Vice President of Emerging Technology John Clark sat down virtually with Viaccess Orca Chief Technology Officer Alain Nochimowski to discuss how generative AI is likely to affect content personalization. Watch their full conversation (above) or read on for some of the highlights.
First, Nochimowski says, generative AI “will definitely impact UGC.” In the near future, he predicts, much of what we today think of as user-generated content “will be automated and created through LLMs.”
Additionally, generative AI will aid “premium content” creation processes, where it should “definitely have a big impact.”
“You’re going to witness,” Nochimowski says, a “change of paradigm in terms of how video gets personalized.”
Currently, he explains, “You segment your audience and then you basically send the right video or the right video advertising to the right audience.” But with generative AI tools, we will be able to “go one step further. We’ll be able to craft the message specifically and possibly, even sometimes online, you know, to this specific segment that you’re going to address.”
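The paradigm shift Nochimowski describes can be sketched in a few lines. Everything here is invented for illustration (the `call_llm` stub, the segment briefs, the file names); a real deployment would call an actual text-generation API:

```python
# Hypothetical sketch of segment-level personalization: classic targeting
# picks one of N pre-made promos per segment, while a GenAI step drafts
# the promo copy for each segment on the fly. call_llm is a stand-in for
# whatever text-generation API a service actually uses.

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"[generated copy for: {prompt}]"

SEGMENTS = {
    "sports_fans": "high-energy tone, mention live match coverage",
    "film_buffs": "cinephile tone, mention the classics catalog",
}

def classic_targeting(segment: str, promos: dict) -> str:
    # Traditional approach: pick the pre-produced promo for the segment.
    return promos[segment]

def genai_targeting(segment: str) -> str:
    # GenAI approach: craft the message for this segment at send time.
    brief = SEGMENTS[segment]
    return call_llm(f"Write a 20-word promo for a streaming service. Style: {brief}")

promos = {"sports_fans": "promo_sports.mp4", "film_buffs": "promo_films.mp4"}
print(classic_targeting("sports_fans", promos))
print(genai_targeting("film_buffs"))
```

The design point is that classic targeting is limited to a fixed promo inventory, whereas the generative step can produce a distinct message for each segment, or even for each delivery, at send time.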
Nochimowski says, “The other opportunity, I think, relates to the knowledge that a broadcaster or video service provider will get from its own audience.” He explains that companies will be able to “extract even more insights” from viewership data. It will help to foster “a new level of engagement with the audience.”
Despite all of this, Nochimowski admits, “The challenge is manifold” in working with generative AI. He emphasizes the importance of crafting guardrails, citing concepts like brand safety and brand identity, and says it’s important that the guardrails evolve “in sync” with the technology.
GenAI has created “a lower entry into technology” for many individuals, and Nochimowski suggests that people start by experimenting with these tools. After all, the technology is progressing incredibly quickly, but it still takes human effort and back-and-forth to get the best product from an LLM or other tool.
VR Production Isn’t an Experiment… It’s a Business
TL;DR
VR Production Workshop instructor Nick Harauz will guide attendees through a two-day on- and off-site workshop during the upcoming NAB Show.
A longtime educator in the post-production space and director of marketing for Boris FX’s Continuum products, Harauz will walk workshop attendees through some of the most cutting-edge tools and techniques in the VR and 360 space.
With Apple’s Vision Pro headsets, new iPhones with the ability to record spatial-based video, and newer cameras from Insta360 (including the X3, which will be available at the workshop), Harauz says this unique type of filmmaking is becoming more of a business and less of an experiment.
VR Productions Workshop Day 1 on Friday, April 12 will be held off-site but meets at the Las Vegas Convention Center in Room S226 at 9:00 AM sharp. VR Productions Workshop Day 2 will be held on-site in Room S226 from 9:00 AM – 5:00 PM on Saturday, April 13.
VR Production Workshop instructor Nick Harauz will guide attendees through a two-day on- and off-site workshop during the upcoming NAB Show where attendees will gain hands-on training in both the production and post aspects of this rapidly growing field.
A longtime educator in the post-production space and director of marketing for Boris FX’s Continuum products, Harauz will walk people through some of the most cutting-edge tools and techniques in the VR and 360 space, which includes capturing 360 imagery for VFX work in projects designed to be shown flat.
VR Productions Workshop Day 1 takes place 9:00 AM – 5:00 PM on Friday, April 12; it will be held off-site but meets at the Las Vegas Convention Center in Room S226. VR Productions Workshop Day 2 will be held on-site in Room S226 from 9:00 AM – 5:00 PM on Saturday, April 13.
In a video interview (above), he recounts his own, and the industry’s, evolution since the watershed moment when GoPro introduced Odyssey, a 16-camera 360 rig the sports camera company developed with Google.
“That was one of the first workshops that we held,” he recalls, “and I’ve seen that industry shift from that time, which people originally referred to as the ‘Wild West,’ to where VR, AR and 360 productions are today.”
He points out that with Apple’s much-discussed Vision Pro headsets, new iPhones with the ability to record spatial-based video (a form of 360 video), and newer cameras from Insta360 (including the X3, which will be available at the workshop), this unique type of filmmaking is becoming more of a business and less of an experiment.
Likewise, the growth of post tools to stitch, manipulate and edit 360 material has also contributed to this evolution, and he plans to give attendees a solid lesson in many such features in NLEs including Adobe Premiere Pro, Blackmagic Design Fusion (within DaVinci Resolve) and Apple’s Final Cut Pro, as well as in Insta360 Studio, now in its V4.6.0 release, which has its own iPhone app for the company’s content.
“We’ll also look at some higher-end workflows,” he promises, such as using Mocha VR from Boris FX within Adobe After Effects to perform roto-type effects on 360 content, and apps such as 3DVista’s to create virtual tours and prep 360 footage for interactive experiences.
Harauz will also discuss using 3DVista’s apps to create virtual tours and to mark portions of 360 footage for interactive video. The company makes 3DVista Stitcher 4, Virtual Tour Pro and hosting services optimized for this kind of content.
The use of 360 video for applications such as headset-based real estate tours and many other types of production, he says, “is becoming more and more prevalent.”
No longer the “Wild West,” this type of work has become much more mainstream, Harauz says, and as more people access newer and cooler headsets, the entire field promises to further expand significantly in the coming year.
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
April 8, 2024
Shira Lazar: “What’s Trending” at This Year’s NAB Show? Content Creation and Monetization
Watch Emily Reigart’s interview with Shira Lazar.
Shira Lazar, founder/CEO of What’s Trending, will serve as the first-ever host for NAB Show’s Main Stage. Register online now to attend NAB Show, scheduled for April 14-17, using code AMP05.
“I’ve always loved NAB Show,” Lazar says, “because it has this mix of the technologists, the innovations, the disruptors, and then entertainment and media.” NAB Show represents her “sweet spot” and enables Lazar to “learn and educate myself while also connecting with people that I’ve known for years.”
So what’s on trend for the Creator Economy? NAB Amplify asked, and Lazar answered. Watch our full conversation, above, or read on for some of the highlights.
Short form or long form content? These days, the answer, according to Lazar, is “both” — and it depends. TikTok, Instagram Reels and YouTube Shorts are all “huge,” she says.
Creators will “need to be playing around with both” since nearly “every platform supports both.” However, the money is with YouTube, and “on YouTube, longer form content is what wins in terms of monetization.”
In terms of which social platform she thinks will break through next, Lazar has noted an uptick in LinkedIn content creation. “There’s literally influencer marketing companies that have launched just for LinkedIn content creators,” she says.
Lazar recommends that content creators do their best to be “very flexible and nimble with your content creation, your workflow and your processes, so that you can shift very quickly.”
Due to the number of formats, she says, it’s important to consider: “How do you take one piece of content and make it available to consume in different ways, depending on the platform?” Lazar thinks this is a good use case for AI tools, enabling solopreneurs to “create more efficient[ly], and for cost and time processes for all of us to exist in the right way on these platforms.”
Lazar says that the “democratization” that began with social media content distribution is “on steroids because of AI.”
Still, Lazar suspects that specialization will continue to be the name of the game for some creators. “When you’re building a brand, you do sometimes need to specialize at first, to then go general or to bring in other interests that you have.”
She says, “Depending on people’s interests, or where they get their audience, where they see traction, they will double down and hunker down on that, and they will be known as the TikTok star, the LinkedIn star, the YouTube star.”
However, she points out, “MrBeast now is going everywhere. He just signed a deal with Amazon Prime. In many ways, he will always be known as the YouTube star. You will have the 1% of creators, of course, go everywhere and make it work everywhere.”
Spatial Computing and the Future of Content
One future trend Lazar is willing to bet on? Spatial computing coming to the Creator Economy.
“I think we’re gonna see content creators rise from Apple Vision Pro, and whatever other tools or tech that we have, and content created specifically for” immersive devices. At a minimum, she predicts content will need to be “interoperable” or “able to exist on mobile, on desktop, on VR, AR, whatever.”
For content creators, that means they’ll have to consider, “How do you make your content translate to all those places?”
“I think it will be a tool, and there will be people that become Vision Pro masters, experts, stars, but it won’t be everyone” who migrates to the platform, Lazar says.
She doubts that the XR glasses will be set up as a competitor to existing services like Twitch, but rather thinks “the smartest thing” will be for companies to consider how to integrate their offerings and leverage the new tools from Apple.
Working in the Creator Economy
“This idea of the digital content creator being a job, I love it,” says Lazar. “I love that it’s become more mainstream, that more people want to do it because that means we’re all in this together.”
She explains, “The reality is, it’s challenging. And this idea that you can just become this — blow up on this platform, and now it’s your full time job and make X amount of money — is just a fallacy.”
Instead, Lazar urges would-be professional creators to “lean into your passions, lean into your purpose, what do you care about? And start with that and start creating right now. But don’t leave your entire life behind.”
“Know that sometimes it’s going to be great, and sometimes it won’t be, so have different revenue streams and options for yourself,” Lazar advises. “Use content and what you love to build thought leadership and lead to” potential brand deals or other partnership opportunities.
It’s important to “be practical, be reasonable, and then take care of yourself and your mental health.”
That solidarity and focus on mental health is meaningful to Lazar, who also founded Peace Inside Live, which she describes as a “wellness agency and collective,” in 2020.
“You can’t necessarily create when you’re in a bad mental state,” Lazar explains.
She also disputes the idea that the only successful creators are making their content full-time. “That is not true. We all create in different ways, and so I think we need to be looking at our lives as individuals in terms of what we each want, what I want is different than possibly what you want.”
April 7, 2024
Omdia Research: Factors That Will Further the Adoption of FAST
TL;DR
All major studios, Netflix and Amazon included, are moving ahead “more forcefully” with ad-centric strategies this year, according to new research from Omdia.
FAST services offer many opportunities to increase revenue and viewer engagement, but FAST ad revenues are small in comparison to the rest of the advertising market.
The appetite for FAST is growing among consumers more quickly in countries outside the US, but the US will still dominate FAST as far out as 2029.
The Media & Entertainment market will top $1 trillion in 2024, with advertising making up the majority of revenues and pay-to-free models stimulating advertising growth.
With online video taking the lion’s share ($367 billion) of the $1 trillion global total, the industry shouldn’t neglect traditional TV. Its total revenue is not far behind at $345 billion. Games comprise $255 billion of the total and cinema comes in at $41 billion.
Contrary to popular belief, pay-TV has not declined massively. “It is more the case that online has grown considerably,” says Omdia’s Maria Rua Aguete. “In other words, pay-TV is here to stay for the foreseeable future, but moving forward, online is where the growth is.”
What is notable is that advertising makes up the bulk of all revenues: 62% of the online video total and 43% for traditional TV. Even a third of revenue from games this year will come from ads.
What’s more, online video advertising will continue to grow at a 17% CAGR, as it has done since 2018, to become the top source of revenue in 2028.
SVOD subs are growing but growth has slowed. The number of SVOD services is now set, with no new major streaming services expected. “Consolidation is the way forward,” she says.
Video service stacking is still on the rise mainly due to free services. Pay-to-Free will stimulate advertising growth in 2024.
All major studios, Netflix and Amazon included, are moving ahead “more forcefully” with ad-centric strategies this year.
Of the $377 billion total video advertising pie predicted by Omdia for 2024, $8 billion will come from Free Ad Supported TV (FAST) channels. It’s a lot — but pales in comparison to the $32 billion from premium AVOD, the $131 billion from social media and the $141 billion from linear TV advertising.
FAST in fact represents just 7% of the total premium TV and video ad revenue.
“We don’t think the industry should be so obsessed with FAST to the point that it obscures all else. Other ad revenue streams are equally if not more important,” says Rua Aguete. “Do I believe in FAST? — Yes, with the context of how much value there is from ads across all media and in context of each market.”
As a result, Omdia forecasts a future of hybrid streaming models with FAST in the mix.
Total revenues from FAST will continue to grow to $12 billion by 2028, dominated by the US, and approach $13 billion by 2029, by which time Western Europe’s share will only have topped $1 billion and LATAM’s $400 million.
The US will dominate the FAST market in 2028, as it does now, followed by the UK, Brazil, Canada and Australia.
“The appetite for FAST is growing among consumers more quickly in countries like the UK (where viewership has grown seven times in the last three years, or, put another way, 21% of people use FAST on a weekly basis) compared to the US, where it has grown less than two times in the same period to 46% of people using FAST on a regular basis.”
The popularity of FAST among Americans is explained by the country having no real history of free-to-air broadcasting.
Another key point Omdia will make is the dominance of hardware players in the FAST landscape. Pluto is the global leader, with the likes of Vizio, Roku, Samsung, TiVo, Hisense and Xumo also active. That the hardware players are very interested in this ad world is simple math and business sense.
Rua Aguete shares data showing that total TV hardware revenue in 2022 was $21 billion in the United States, and that CTV ad revenue in the same territory was comparable at $20 billion. Yet profit margins are much higher on advertising revenue than on hardware sales.
“If hardware players can make a profit of more than 50% on advertising versus less than 1% on actual hardware product their interest in advertising is clear. The TV hardware business is struggling but the platform business is growing.”
For example, Vizio’s smart TV platform revenue was $533 million (3Q22–2Q23) with profits of $322 million (3Q22–2Q23) at a 60% margin. Roku’s platform revenue grew to $2,777 million (3Q22–2Q23) and its platform profits to $1,510 million at a 54% margin.
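As a quick arithmetic check (an illustrative sketch, not part of Omdia’s analysis), the margins quoted follow directly from the revenue and profit figures above:

```python
# Platform profit margins implied by the trailing-four-quarter (3Q22-2Q23)
# figures quoted above, with inputs in $ millions.
def margin_pct(profit_m: float, revenue_m: float) -> int:
    """Profit margin as a whole percentage."""
    return round(100 * profit_m / revenue_m)

vizio_margin = margin_pct(322, 533)       # Vizio platform: 60%
roku_margin = margin_pct(1_510, 2_777)    # Roku platform: 54%
```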
Rua Aguete will also point out that Amazon’s ad-supported Prime Video will spearhead a new wave of shoppable TV in 2024. Ad-supported Amazon Prime could make over $2 billion in ad revenue in 2024 and could leapfrog Netflix, Paramount+, and Peacock to become the second-largest hybrid OTT video service by ad revenue this year.
Available now to download, “A Beginner’s Guide to FAST” will be presented at NAB Show by GRG Global’s VP of Media Research, Gavin Bridge.
April 7, 2024
What’s Advancing the Adoption of Virtual Production?
TL;DR
Andy Jarosz, virtual production supervisor at Smash Virtual Studios, compares VP to digital cinematography, which took a decade to fully become the norm.
As the technology continues to mature, “more and more productions can see themselves utilizing it,” Jarosz says, highlighting new tools such as ARRI’s Virtual Twins, which allow ARRI lights to be incorporated into Unreal Engine.
“If you’re going to be shooting in someone’s living room,” he says, it makes much more sense “to just go film in a living room. But if you want to shoot on Mars, it’s way easier to film on the LED volume.”
Virtual production has come a long way just this past year, and NAB Show 2024 attendees want more information about ways to do actual work in this exciting arena. Andy Jarosz, virtual production supervisor at Smash Virtual Studios, will be leading three sessions at the Post|Production World training conference designed to deepen attendees’ understanding of this multifaceted topic.
In a video teasing his presentations, Jarosz notes that Smash has become the largest LED studio in the Chicago area since its 2022 founding and that it’s providing services for a large number of projects, primarily TV shows.
The 18,000-square-foot building near downtown offers a variety of LED screen configurations and Unreal Engine playback services for high-end commercials, TV and film, including Dark Matter for Apple TV+, Chicago PD (NBC) and a number of feature films. In his capacity at the company, he has become an expert in the technology and workflow of virtual production, and his sessions will impart much of this acquired knowledge.
People in the film industry “don’t always like new things,” he offers, observing that it takes a long time for new technologies to be adopted. Likening virtual production to digital cinematography, he says that novel technologies are frequently met with disbelief. “People just assumed it was a gimmick,” he says of the early examples of digital cinematography. “It took a decade to fully become the norm.”
“As the technology matures,” Jarosz continues, “more and more productions can see themselves utilizing it.” He says that the industry is arriving at a point now where cinematographers “understand that there are certain considerations when shooting on an LED volume, that you have to treat it in a specific way. But those considerations are not the end of the world. Those concessions are just technique and just process.”
He points to new tools, such as ARRI’s Virtual Twins of their lights, which can be incorporated into Unreal Engine, “so now cinematographers are able to say, ‘I want a SkyPanel and I want it set to these settings inside the virtual space.’ And they’re going to get exactly what they expect. The tools are catching up.”
Further, he adds, “We’re going to be talking about things like exposure. We’re going to be talking about color science. We’re going to be diving into specifics about Unreal Engine because a lot of times, people just don’t have a good understanding as to what Unreal can do and what it can’t do.”
He’s also going to provide a whole section just on car process shots, which have been among the quickest types of work to be embraced for feature film and TV applications. “We’re going to be getting fairly technical into the process of virtual production and shooting against LED walls as a backdrop.”
The core of this approach starts with creating elements within Unreal Engine that can interact with camera attributes such as movement and optics. “There are specific nuances that you need to consider when you are working on an environment for virtual production specifically,” he cautions. “Unreal Engine is a gargantuan piece of software. It’s designed for massive teams of game developers to use. It’s not meant to make movies. And so, there are certain caveats to it: things that do and don’t work on an LED volume.”
His experience suggests it’s easier for people from the film industry to pick up Unreal Engine than the other way around. “The games industry works in a completely different way,” he notes. “They have completely different requirements. And they work to different standards. Games have all different kinds of art styles. They can be cartoony. They can be realistic. They can be anything in between. Film has one art style. It has reality. And anything beside that is just not acceptable.”
“Often, what we’re finding as a studio is that we’ll get environments and levels designed from outside companies that aren’t used to creating content for this specific use,” he explains. “Then we need to go in and redo a bunch of stuff, redo a bunch of settings. Or they’re just not constructed in a way that’s conducive to filming. This class is about communicating those requirements that we have as a stage and going through all of those little nuances and making sure that the levels that people are designing are up to scratch when it comes to a more cinematic workflow.”
Aimed squarely at producers, directors, UPMs, agency producers and people in similar capacities, this session is an overview that breaks down where this powerful technology actually benefits a production and when it’s just unnecessary.
These are the types of questions Jarosz fields as part of his job. “If you’re going to be shooting in someone’s living room,” he says, it makes much more sense “to just go film in a living room. But if you want to shoot on Mars, it’s way easier to film on the LED volume.” Between those two extremes lies the point at which utilizing the services of a company like Smash does or does not make financial sense.
Jarosz will provide some success stories exemplifying how the use of a virtual production environment (which obviously involves some relatively significant expenditure) has proven cost effective for specific projects, such as saving the cost of company moves when a show needs to shoot six locations in a day or making the most of a situation where a celebrity can only offer an hour or two to a production.
“These are all niche use cases that virtual production can solve and we’re going to dive into all of them,” he says in anticipation of his exciting presentations. “We’re also going to go over specific budget comparisons: shooting scenes practically versus on an LED volume to show cost breakdowns as to what people should expect when they book an LED stage.”
Anyone interested in getting into virtual production, or in understanding more about the nuts and bolts of the process, should consider adding Jarosz’s sessions to their NAB Show calendar.
Streams and Screens: IMAX Is Delivering Something New
TL;DR
IMAX to debut a groundbreaking 15-perf/65mm film camera at the 2024 NAB Show, marking a significant innovation in large-format cinematography. The new camera stands out even as digital cinematography dominates the industry conversation.
IMAX’s distinctively taller aspect ratio — 1.90:1 or 1.43:1, depending on the venue — isn’t about cropping and blowing up a standard 2.39:1 anamorphic or 1.85:1 flat image.
In partnership with Disney and other content platforms, IMAX Enhanced Home format and streaming technology aims to bring the immersive IMAX experience into viewers’ homes.
Watch: Shooting “Oppenheimer” in IMAX
IMAX Corporation, whose large-format cameras rely on motion picture film, has been seen as a significant driver of Best Picture winner Oppenheimer’s theatrical success, and is even unveiling a long-anticipated new 15-perf/65mm IMAX-format film camera at the NAB Show.
Paul Masi, Academy Award-winning sound mixer, will speak about how he optimizes his mixes for IMAX’s proprietary sound format. Evan Jacobs, head of finishing for Marvel Studios, will discuss the multi-phased process of creating IMAX deliverables for their films. Large format still photographer Tyler Shields is set to speak about his preferences for large format film and its advantages over other photographic media. Vanessa Bendetti, head of motion picture & entertainment for Eastman Kodak, will be on hand offering her perspective on the importance of IMAX to the motion picture film business. IMAX’s Greg Ciaccio, VP of post production for original content & image capture, will moderate.
While digital cinematography cameras from heavy hitters such as ARRI, Sony and RED (recently acquired by Nikon) will certainly be at the forefront of many conversations at NAB Show, IMAX’s much-anticipated new 15/65mm film camera will also attract a lot of attention.
“We’re planning to have a camera there,” Markoe says. “And we’re getting very close to having the prototype out in the hands of DPs very shortly. So we thought it was a good opportunity to give a little more info about what the new cameras are going to be.”
Along with the camera, the company will also be unveiling some new digital tools specifically built for finishing IMAX movies.
“People may not realize that basically every movie you see in an IMAX theater is being completely remastered for IMAX,” Markoe notes. “We’re not just using the DCP that’s showing in regular theaters.” Whether shot in 15/65 or digitally, films released for IMAX theaters go through unique post processes, up to and including the film-out stage for exhibition.
IMAX’s distinctively taller aspect ratio — 1.90:1 or 1.43:1, depending on the venue — isn’t about cropping and blowing up a standard 2.39:1 anamorphic or 1.85:1 flat image.
“We are applying proprietary technology to enhance that movie, to look as good as it can on our bigger screens that are brighter and have higher contrast. So, we are making a unique master, both for picture and for sound, because our sound format is not Dolby,” he explains.
“It’s our own proprietary sound format. Done with the filmmakers in complete control of picture and sound. When it’s shot at the [taller] aspect ratio, that’s what you see. So you are really seeing a unique version of the movie in our theaters.”
Markoe, who has an extensive background in post, observes that the increase in resolution between traditional and IMAX can mean that aspects of the image that look great in a standard film or digital presentation might become obvious problems in a large format screening.
“Often,” he says, colorists or VFX artists will add digital film grain in post, “which on most theater screens will look good, but then they see it in IMAX, and it’s too much! We are able with the DMR [proprietary digital remastering tool] to feather that amount back to the director’s taste, but we also do that both for our laser and our xenon projector theaters separately to make sure it looks exactly the way they expect it to look in all of our theaters.”
Markoe also touts IMAX’s 5.0 and 12.0 sound systems as providing a unique audio experience. “It is sub-bass managed so it doesn’t use a discrete subwoofer channel,” meaning that there is no audio channel specifically dedicated to the lowest frequencies, as there is in “.1” systems. Instead, the entire spectrum of frequencies is contained in all channels, and everything below the cutoff frequency of 70Hz is sent by the playback system to the subwoofers.
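The sub-bass management Markoe describes can be sketched in a few lines: every speaker channel keeps its full-range feed, while content below the 70Hz cutoff from all channels is summed and routed to the subwoofer array rather than carried on a discrete “.1” channel. (The sample rate and the fourth-order Butterworth crossover below are illustrative assumptions, not IMAX specifics.)

```python
# Simplified bass-management sketch: sum every channel, then low-pass the sum
# at the crossover frequency to derive the subwoofer feed.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000        # sample rate in Hz (assumption)
CUTOFF_HZ = 70     # crossover frequency cited for the IMAX system

# Fourth-order Butterworth low-pass in second-order sections (order assumed)
_sos = butter(4, CUTOFF_HZ, btype="low", fs=FS, output="sos")

def subwoofer_feed(channels: np.ndarray) -> np.ndarray:
    """Derive a mono subwoofer signal from a (n_channels, n_samples) array:
    sum all channels, then keep only content below the cutoff."""
    mono_sum = channels.sum(axis=0)
    return sosfilt(_sos, mono_sum)
```

Feeding this a 40Hz tone leaves it nearly untouched, while a 1kHz tone is attenuated to almost nothing, which is the behavior a bass-managed system relies on.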
“We actually have more powerful subwoofers than any Dolby theater out there. We have bigger subwoofers and more of them, with more amplification, and the whole system allows the low end in our theaters to perform in a way that other theaters can’t, which is another thing that Chris Nolan loves about [IMAX].”
While IMAX has been in the news recently for feature films released in the format, such as Oppenheimer (shot in 15/65) and Dune (shot digitally but remastered and released for IMAX theaters), the company is still committed to overseeing the kind of documentary specialty film it built its reputation on.
“We have documentaries, original documentaries, film for IMAX program,” Markoe explains. “Our Blue Angels documentary is coming out this year. We went through and tested eight different cameras to make sure that the camera in the cockpit looks best in IMAX.”
Ciaccio adds, “Documentaries are something that we’ve made from the very beginning. There were a few years recently where we made very few. We’re now making more documentaries and we’ve started producing more content and acquiring and distributing a lot, too.”
While IMAX’s primary claim to fame is based on super-sized theatrical experiences, the company has also been developing digital tools for enhanced home viewing. They will have a presence at the show where people can learn about developments with IMAX Enhanced Home format.
In addition to its focus on large neg and prints, IMAX will also be at the show with developments for home entertainment viewing, including IMAX Enhanced Content for optimized 4K HDR presentation at home, and its IMAX Streaming technology designed to optimize streaming of audio and video used by Disney and other content delivery companies.
Studios are increasingly mining videogames for characters and stories to bring to life in TV shows and films, particularly as audiences grow tired of story lines based on comic books. A key part of success is appealing to die-hard fans and new audiences without betraying the original game.
An article by Sarah Krouse in the Wall Street Journal explores this trend with comments from several executives in gaming and film. Gamers are a highly engaged bunch, Helene Juguet, managing director of Ubisoft Film & Television, tells the WSJ. “If they don’t like something, they will tell you.”
Hollywood is commissioning more video game adaptations, and the reasons are clear: Movies based on video games and released broadly in theaters grossed $712.2 million at the domestic box office last year, more than double what they brought in the prior year, according to ComScore. Superhero film adaptations, meanwhile, brought in about $1 billion domestically, down 42% from the prior year.
Video games offer fresh characters and new worlds that can appeal to young children and their parents — like Sonic and Mario’s adventures — or teenagers and adults seeking mature story lines, such as those in HBO’s hit series, The Last of Us.
In addition, a generation of gamers are now in creative positions across Hollywood. Yet failure to get the translation to screen right does no one any favors. Paramount’s first stab at bringing Sonic the Hedgehog to the screen faced a backlash over the trailer, prompting the studio to quickly conduct focus groups and hire a new animator to alter the character’s appearance so that it appealed to die-hard fans.
“Every design now is vetted within an inch of its life,” Marc Weinstock, president of worldwide marketing and distribution at Paramount, tells the WSJ.
It’s why game developers and fans are often now deeply involved in adaptations, ensuring that the end product honors the source material.
But it’s not exactly a two-way street. Film and TV producers want games more than game developers want Hollywood. While few films make $1 billion, hit video games can generate several billions of dollars in sales over their lifetimes, which means that for some game makers, film and TV adaptations are more trouble than they are worth.
“In failure, we run the risk of compromising the underlying intellectual property. So, it’s a high bar,” said Strauss Zelnick, head of Grand Theft Auto developer Take-Two Interactive Software.
April 7, 2024
International Content Hits the Inflection Point
TL;DR
Ampere Analysis founder Guy Bisson says the global TV and streaming industry is shifting away from Hollywood toward a global rise in demand for non-English content.
The shift, says Bisson, is part of a wider and more fundamental story about changes in the content business over the past 18-24 months in which the studios refocused on streaming.
This change in direction is the result of a general slowdown in growth opportunities and cost pressures, he says, that were partly driven by investors who increasingly turned against the studio direct model.
The major streamers now produce more than half of their content internationally, an inflection point from traditional US-centric production to a more diversified and growing global approach.
In an essential Town Hall presentation at the 2024 NAB Show, Guy Bisson, executive director and co-founder of Ampere Analysis, will illustrate how the global TV and streaming industry is shifting away from Hollywood toward a global rise in demand for non-English content.
The shift, says Bisson, needs to be seen in context as part of a wider and more fundamental story about changes in the content business over the past 18-24 months in which the studios refocused on streaming.
That change in direction is the result of a general slowdown in growth opportunities and cost pressures that were partly driven by investors who increasingly turned against the studio direct model.
In addition, wealthy Western markets were saturated for streamers. Simply put, there were no new customers to find. One way of finding new customers is demographic expansion: streamers have traditionally skewed toward a younger audience, but there is still headroom among older viewers.
The bigger opportunity lies in international expansion, which is why streamers have trained their focus on markets with room to grow.
“Clearly the best way to target them is to provide content that will engage the audience in those regions and you do that by making content that appeals. That has been the big driver for the internationalization of content.”
The fact that it is expensive to produce content in the US as well as other countries like the UK is another driver.
“These three interrelated factors have all led to the internationalization of content. What it means is that the US is no longer the be all and end all of production, an issue highlighted during the 2023 strikes,” says Bisson.
“The changes in the business models that the streamers had planted created the environment in which those strikes took place.”
The Global Growth Opportunity
The Ampere boss is very clear that the US market remains “orders of magnitude bigger than anyone else in value terms,” but equally that the growth opportunity lies elsewhere.
There is still growth to be had in Western European markets, he says, including the UK, Italy, France, Germany and Spain. Eastern Europe is showing growth too.
In Asia, South Korea has led the way with K-dramas like Squid Game; Japan is not far behind, being particularly strong in feature films (like Oscar winner Drive My Car).
In terms of where we’re going next, Bisson points to sub-Saharan Africa (in countries like Nigeria) and the Middle East.
He draws a parallel between current trends and the multichannel growth that followed digital TV’s introduction into the cable market.
“Over time thematic channels that were entirely programmed with US content evolved to become more and more local in terms of content, and continuity and presenters. We are seeing the same thing.”
Streamers are funding content that has to work both locally and for global distribution. Local content means local language, local actors and locations, but with production values and aspects of story and narrative that are global in appeal.
“Drama is generally used as a spearhead just as it was in the early days of streaming, but beginning before the pandemic streamers have moved heavily into unscripted, to the point today where over half of first-run original commissions — across the board from Netflix, Amazon, Disney, Warner Bros. Discovery, Apple and more — are now unscripted.”
Check the Numbers
A full-year comparison of the number of scripted shows produced in the US in 2023 against the average of the previous two years shows a 38% decline. Netflix and Amazon now generate more than half of their content outside the US, underscoring a decreasing reliance on American production ecosystems. This global pivot is not merely a shift in geography but may signal a broadening of narrative scopes and audience engagement strategies.
“The majority of content on their platforms is scripted today, but the majority of first-run commissions going forward are unscripted, with formatted content favored because of its adaptability to international markets.”
Ampere forecasts global content investment to grow by upwards of 30% across 2023-2028. Central and South America will see increased investment, along with Asia and MENA & SSA, whereas a leveling off and slight decline is expected in Europe and the US.
Markets like India have proven hard for international streamers to crack despite their attractive scale, in part because of the relatively low value per customer, but premium advertising tiers may help grow subscriptions.
“Ads serve a dual function in both the US and international markets which is churn protection, giving customers who would otherwise leave an opportunity to downgrade and stay onboard,” says Bisson. “Ads also bring in a new market opportunity around people who might not have subscribed at all due to cost pressure.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
Post-Peak TV streamers will lean more heavily on international content, take fewer risks, and return to network TV format and scheduling.
April 6, 2024
How AI Can Improve Production Workflows (But Not Genuine Creativity)
TL;DR
Strada co-founder and CEO Michael Cioni highlights the transformative potential of “utility AI” in automating mundane production tasks rather than replacing creative processes in the film and television industry.
Launched by the Cioni brothers, Strada focuses on leveraging AI to streamline production workflows, such as sound syncing and file organization, rather than automating creative tasks.
Strada aims to democratize access to AI tools for the media industry, offering an open platform that contrasts with proprietary models, and emphasizes ethical standards in AI usage.
Strada’s platform is designed to learn from users’ media over time, enhancing efficiency and offering personalized solutions, setting it apart from generic AI applications.
Driving back from NAB Show last year, Michael Cioni and his brother Peter Cioni made their decision to quit their respective jobs at Adobe and Netflix. “I didn’t see any AI at NAB — and that was an opportunity,” he said.
Unlike many in the industry, Cioni approached artificial intelligence from a different and, he believes, more pragmatic and valuable angle.
“When something hits any industry like a whirlwind, it’s a sign to pay attention because it may mean more than it seems,” he says. “The generative AI space may seem disruptive right now, but I believe that automating mundane tasks will likely prove most valuable in the long run. While generating content grabs headlines, it won’t necessarily revolutionize industries.”
By June 2023, the brothers had launched Strada, a new start-up focused on smoothing production workflow with AI tools. Named after the Italian for “street,” the company targets the workflow tasks, not the creative tasks, that AI can automate and accelerate.
“We are all getting overhyped with the generative side of AI,” the co-founder and CEO insists. “If you really think about it, of all the things people thought AI would transition or annihilate, like truck drivers, airline pilots, and the shipping industry, it turns out the first beachhead is the arts and entertainment. Nobody saw that coming. This disruption caught us off guard, leading to the current hype.”
While transport and logistics industries have long prepared for this, Hollywood was caught off guard. In that context, entrepreneur Tyler Perry’s decision to pull out of an $800 million project to build new soundstages — citing AI — seems hasty.
“I don’t think AI is going to affect our industry as much as that decision making implies,” Cioni says. “It is mundane tasks in manufacturing, not creative processes like writing or editing, that will likely undergo automation. The backroom tasks such as organizing, archiving, or shipping are at risk of elimination by AI.”
He argues, “Editing’s backroom tasks, like transferring files or syncing audio, will vanish. This may lead to job losses, but it will enhance creative work by eliminating delays. Strada, our platform, automates tasks by employing AI engines. It offers features like automatic sound syncing, image detection, and translation for global content.”
While Strada is aimed at the tasks that can be automated to speed the creative process, its model still relies on the generations-old production methodology of raw camera recordings going into post for edit and grade.
“AI will never replace genuine human creativity,” Cioni insists. “Humans value authenticity over synthetic creations. AI-generated content lacks the ephemeral nature of real experiences, making it less treasured. Synthetic AI is a valuable tool, but it won’t replace human creativity.”
The economics of the existing production model may change but Cioni believes there will always be a fundamental need and therefore value for handcrafted filmed stories. Just as today we preface some stories with “based on true events,” we may see films in future presented as “crafted by filmmakers” to distinguish traditional, professionally lit, acted and camera recorded live action from generative AI, which, he says, “will always be synthetic.”
After all, celluloid is still prized by many filmmakers over digital and vinyl has made a resurgence in the age of Apple and YouTube Music.
“AI may change market dynamics, but there will always be a place for authentic, crafted experiences.”
Co-Opting AI to Streamline Costs and Supercharge Creativity
TL;DR
Pinar Seyhan Demirdag, co-founder of creative AI production company Seyhan Lee, is one of the leading figures at the forefront of using AI as a filmmaking tool.
Seyhan Lee has developed and released Cuebric, which uses generative AI to generate photoreal environments in an instant and far more affordably than current methods.
Seyhan Demirdag is adamant that AI tools like hers do not spell the end of physical soundstages or deep human involvement in the craft.
The excitement surrounding AI tends to mask the practical benefits that tools using it can have on film and TV production today. A session at NAB Show, “Ask Me Anything: AI Post Production Workflow Experts Tell All,” intends to separate fantasy from reality.
Among the speakers is Pinar Seyhan Demirdag, co-founder of creative AI production company Seyhan Lee and one of the leading figures at the forefront of using AI as a filmmaking tool. Seyhan Demirdag will be joined by moderator Michael Cioni, CEO and co-founder of Strada; Austin Case, Strada’s director of engineering; director Paul Trillo at Trillo Films; and Colourlab AI CEO Dado Valentic. The session will be held at 10:30 AM on Sunday, April 14 in the Create Zone Theater (SU4087).
“I would like to show a practicable application of GenAI which results in the production of meaningful and emotional storytelling and that can reduce budgets,” Demirdag asserts.
Seyhan Demirdag will attempt to dispel the mystique and even fear that the industry has around the new technology. “I spend a great deal of time contemplating the power of humanity in the age of AI and for me, AI is a tool that benefits human workflows. That ethos is very much at the core of what we do.”
She continues, “AI did not invent itself. Humanity collectively decided that the time has come for us to adapt our workflows and the way we make meaning of life around us. The time has come for us to expand our capacity. Since AI works with parallel processing, it processes information differently to linear computers. So, it has also come time for us to adapt our workflows, our creative thinking and creative execution with the same multi-dimensional and multifaceted thought process.
“Inventions are parallel to the needs of their time. There’s nothing to fear about something whose time has come.”
Seyhan Lee has developed and released Cuebric, which uses generative AI to generate photoreal environments in an instant and far more affordably than current methods.
“AI is open source, so there are AI models on top of AI models that everybody is releasing but there are very few solutions that take the open-source research and make it cinematically appropriate or packaged to be usable by the end consumer. To my knowledge, there are even fewer tools that are geared toward professional use,” Seyhan Demirdag explains.
“There’s definitely a problem in the film industry, which is the astronomical cost of environment building,” she continues. “When an actor changes direction in a Volume, the virtual background needs to render and move correctly with them. The background cannot have a life of its own when the actor moves. But in order to do that, current virtual production workflows essentially require the making of a game. That’s why the cost of creating a background equates to the cost of building a game.”
And yet, Seyhan Demirdag says, if you analyze the background environments that are actually used in production, up to 75 percent of them are not required, or scenes could simply benefit from a simpler background.
“If productions were to adopt a solution like Cuebric they would not only have more options to use but they could also save the astronomical costs of virtual backgrounds.”
Seyhan Demirdag says production designers can use a tool like Cuebric to quickly upload sketches without needing to know how to use code or complicated tools.
“They can make a sketch, upload it, and visualize what they would like to build later on in a matter of minutes. The whole production can instantly understand the vision.”
Maybe the director can shoot a scene in Alaska that previously didn’t fit the budget. Or a writer on episodic TV needing to meet repeated tight deadlines could use the tool to quickly ideate and present what they have in mind.
She says several short films are using Cuebric, along with a promo for a well-known musician and several A-list studios.
Director Quinn H. and his team recently produced a pair of high-production-value shots — looking like Paramount’s Yellowstone or a scene from Dune — for a fraction of the cost by using the software’s background solution.
“I would dare to say that Cuebric makes the world’s most collaborative art form even more collaborative,” she contends.
Seyhan Demirdag is adamant that AI tools like hers do not spell the end of physical soundstages or deep human involvement in the craft.
“Just as I love to read tangible, physical books I believe that the advent of AI will always give us options,” she says. “There will be a possibility for people not to use any actors if they want. It will be a choice. But I know that I will always be watching movies that speak to my heartstrings. That’s why there will always be a world where an actual camera will track an actual actor and the background will be tracked as a relationship.”
Watch “Creator Economy Amplified: AI Tools for Creators.”
TL;DR
From automating mundane tasks to amplifying creative innovation, artificial intelligence is transforming the creator economy. Jim Louderback, editor & publisher of “Inside the Creator Economy,” joins veteran journalist Robin Raskin, The Prismatic Company’s Abe Feinberg, and Rebecca Xu from Opus Clip to unpack the profound influence of AI on content creation.
AI, says Raskin, will be seamlessly incorporated into “pretty much every part” of the lives of AI natives. AI tools are not replacing creators, Louderback says, but serving as co-pilots, enhancing their ability to produce content more efficiently and creatively.
Xu describes how tools like Opus Clip simplify the video editing process, transforming long-form content into engaging short videos with a single click, catering especially to the preferences of Gen Z audiences.
Feinberg explains how creator-designed platforms such as Prismatic will enable creators to design and produce content that’s modular, composable, and remixable, significantly reducing the time and effort required to update and adapt content across various formats and platforms.
AI’s integration into content creation tools is not just a glimpse into the future; it’s a present reality that’s enhancing the way creators produce, manage, and distribute content. From automating mundane tasks to amplifying creative innovation, artificial intelligence is transforming the creator economy, and as AI continues to evolve, its integration into the creative process heralds a new era of efficiency, personalization and engagement.
As part of NAB Amplify’s “Creator Economy Amplified” series, we sat down with industry veterans Jim Louderback, editor and publisher of Inside the Creator Economy, veteran journalist Robin Raskin, The Prismatic Company’s Abe Feinberg, and Rebecca Xu from Opus Clip to unpack the profound influence of AI on content creation.
These industry pros shared their insights into leveraging AI tools to revolutionize content creation, enhance efficiency, and foster connections with audiences. Whether you’re intrigued by the prospects of AI-driven content generation or seeking strategies to refine your creative workflow, this discussion promises a forward-looking perspective on integrating AI into your creative toolkit. Watch the full conversation in the video at the top of the page.
This chat offers a preview of the all-new Creator Lab at NAB Show, a dynamic space dedicated to exploring the newest trends and technologies driving the creator economy. Led by Louderback and Raskin, the Creator Lab will host an extensive lineup of discussions and interactive workshops, with industry experts including Feinberg and Xu offering valuable insights and practical skills to attendees.
The AI Revolution in Content Creation
Louderback uses a simple yet profound analogy to describe the impact of AI on creators: the Slinky helical spring toy. Pre-AI, he says, creators were like a closed Slinky, limited by what they could do by themselves. With AI, possibilities expand and suddenly there’s a lot more ground they can cover.
“The nice thing that happens is now with AI tools, you’re not replacing the creator, but there’s so much more that they can do, cover a lot more ground in that same amount of time, because AI really helps them be a better creator,” he says.
This visualization captures the essence of AI in content creation — expanding the potential of creators without supplanting the human touch that lies at the heart of creativity.
Using AI to generate creative content, such as scripts and even videos, results in stale, generic content that disappears in the blink of an eye, Feinberg agrees.
“The more interesting part,” he asks, “is really how do you build a system where the AI is functioning as the supportive assistant to the creator, and the creator is still the person who’s leading that creative decision process, they are still the creative decision maker?”
As an example, Feinberg points to mobile editing apps. “Some of the motions are very tedious,” he explains. “You have to zoom in and, you know, make these little micro-edits. If an AI can be there as your assistant and be sort of learning the pattern and saying, ‘Hey, I think this might be the kind of edit you want to make, would you like to make it with a single tap,’ in the end, you’re doing exactly the same thing you would have done as the creator. But you’re turning out a lot more of those edited videos in the same amount of time.”
Productivity, Xu argues, is where AI offers the most value to creators. “Ultimately, the best AI tools help creators enhance creativity and efficiency through helping them better manage their time,” she says.
“For instance, coming from the AI video editing world, there’s a lot of AI video editing tools to really help creators streamline the editing process by, say, automatically identifying and correcting errors, enhancing visuals or audio quality.”
Raskin, who comes from a background in magazine publishing during the print era, provides a broader perspective, suggesting that AI’s integration into content creation is part of a generational shift.
“Digital natives grew up in a video-first environment where video became like the lingua franca that everybody speaks and communicates with,” she notes, “and the new generation will be an AI generation.”
AI, says Raskin, will be seamlessly incorporated into “pretty much every part” of the lives of AI natives. “Whether it’s the car that you drive, or whether it’s the events that you produce or whatever that is, AI will be infused and the hardest challenge will be to use it wisely and creatively.”
Personalization and Audience Engagement
Another critical aspect of AI in content creation is understanding and engaging with audiences. By analyzing feedback and preferences, Louderback says, AI tools can help creators identify their most engaged fans and tailor content to meet their interests. “It helps you understand your audience better,” he explains.
“If you’re a creator, and you’ve got lots of people commenting and saying things, whether it’s on your videos, or other places, it can go in and summarize all of those comments and help you find those people that are your biggest fans and figure out what they want, and help you figure out how to give your audience and your community even more of what binds them to you.”
Historically, Feinberg notes, information has been presented in what he calls a one-size-fits-all approach. But AI is driving a shift to a more tailored content delivery.
“You know, you get a lecture for everyone, or everybody watches the same video in the same format, or everyone reads the same blog post,” he details. “But now, there’s a huge amount of potential for AI to help you take really the same core value that you’ve created, but present it in different ways — even on the fly — to an individual person to give them what they most want.”
Practical Applications and Future Prospects
Tools like Opus Clip, which uses AI to automatically edit long-form videos into short, high-quality, sharable bites, and content creation platforms like Prismatic, aimed at helping creators design and build for scale, are the wave of the future.
“At its core, I believe AI is a tool that helps creators tell a better story,” says Xu. “A lot of people, especially Gen Zs, have changed their habit of consuming content from TVs, or long videos or articles, into short videos. So short videos are increasingly becoming a main medium for creators to tell their stories as well as for a user or audiences to consume content.”
Opus Clip, she says, was designed to help creators. “And we’re not only helping individual creators, but also a lot of businesses who want to break into a bigger market through short videos.”
Creators, Feinberg recognizes, have a lot of value to share. “But it can be kind of locked up, because of the amount of time that it takes to transform your content.”
That’s why, he says, his team is building Prismatic, which uses AI to generate graphics, diagrams, videos, podcasts, blog posts and other content with customizable, reusable components.
“We’re working on a platform to help build content in a way that’s modular, composable, remixable,” he explains. “So you can more easily deploy things in different formats and different modalities. And then where we’re going with that is the idea that we know creating high-quality content takes a lot of effort. And we want to help creators, not circumvent that effort, but get more out of the effort that they’re putting in.”
For creators just beginning to dip a toe into AI, Raskin recommends AI aggregator There’s an AI for That, a community of AI founders and users boasting what the developers call the largest database of AI tools and tasks.
“I would not go in there thinking you’ve got the answer to the world,” she cautions. “But it’s so interesting, because it lays [everything] out in categories, law, contracts, events, scheduling, and you can look up an AI-specific thing. And the deeper secret is, most of them are free for a trial. So you can experiment, decide whether it’s for you or not.”
April 4, 2024
Hitachi Content Software for File Enables Next-Generation Media Platform
With Hitachi Content Software for File, we were able to provide a cost effective, highly scalable ingest and playback environment that met both the performance needs as well as staying within the client’s budget.
Challenge: Major Studio Facing Scaling Issues with Massive Data Inflows
A top media studio with a massive amount of daily created video was overwhelming its existing storage platform. The legacy system couldn’t scale to meet the ever-growing demand for video ingest. Furthermore, the studio required an active workspace for large video files with playback capability that wouldn’t drop frames. Expanding the legacy system was deemed too costly and impractical.
Solution: A Scalable, Active Data Environment Powered by Hitachi Content Software for File
Hitachi deployed a modern platform for the entire creative workflow, including VFX, 4K and content playback. The solution collapsed the legacy platforms into a single modern filesystem based on Hitachi Content Software for File. Furthermore, the new platform is futureproof and will support faster adoption of new creative mediums and capabilities, including next-generation content types such as 8K and VR. The simplified environment further allows the customer to migrate from legacy systems in a scalable and cost-effective manner. The system was a perfect fit for the customer in terms of price and performance.
Outcome: A Modern, High-Performance, Next-Generation Media Platform
Hitachi Vantara deployed a scale-out, high-performance ingest tier capable of growing as fast as the customer requires. This new platform allows the customer to fully utilize its high-performance data resources while maintaining headroom for future upgrades. Hitachi Content Software for File gives this customer a cost-effective, scalable, hyper-simplified environment that enables the creative teams to create high-quality media types quickly.
Roberto Schaefer, ASC, AIC: The Challenge of Cinematography Is “Figuring Out How to Make It Work”
TL;DR
The work of a cinematographer begins and ends with guiding a director’s vision to screen, but the path is rarely clear, says BAFTA-nominated Roberto Schaefer, ASC, AIC.
Successful interpretation of a director’s vision, he says, requires relationship management and persistence for a DP to be able to get their point of view across.
To Schaefer, the most interesting part of the job is the creation of images from scratch, but he believes there is a risk that this will be superseded by GenAI.
Schaefer will be leading a three-hour interactive workshop at NAB Show, “Script to Screen – The Cinematographer’s Process,” walking attendees through a cinematographer’s creative process of crafting iconic cinematic images.
Cinematography 101 is all about interpretation, Schaefer explains.
“Some directorial communication can be so obtuse you really need to work hard to discern what it is they wish to say.”
“Many issues are a matter of management such as liaising with VFX and keeping on top of post-production.”
Schaefer is a BAFTA-nominated cinematographer who began his film career in Europe working for Martin Scorsese, Joe Pytka and Nestor Almendros. He eventually moved to Los Angeles to shoot for director Christopher Guest on films including Best in Show and Waiting for Guffman.
In an extensive collaboration with director Marc Forster, he made Monster’s Ball, Finding Neverland, The Kite Runner and Quantum of Solace. Schaefer has also shot The Host, Red Sea Diving Resort and episodes of Amazon sci-fi The Peripheral.
During his three hour workshop at NAB Show, Schaefer will show clips from The Paperboy (2012) explaining how he worked with director Lee Daniels to find the visual language of the movie.
“I like to get general ideas of the tone and feeling from a director and if there are any specific ideas they have in mind. There are typically references from stills or other movies to lean on, but I like to be given a free hand to interpret the movie in a way that I think fits the visual context of the story. It’s not okay for me to copy a shot from another movie. It is fine to use these as an influence, but let’s find a way to integrate it into our language.”
Schaefer will also show clips from Stay (2005) directed by Forster which involved some detailed visual effects sequences. He will share elements of the original script along with pre-production diagrams and previs material to show the progress of bringing a concept from page to screen.
The best-laid plans are at risk of being undermined by budget, so compromises or alternative solutions may need to be found.
“You might have this great vision and you might want to translate it in a certain way but then the budget people come in and say ‘Sorry, you can’t do that.’ So, you have to figure out another direction.
“Even if you have $250 million they want $400 million worth on screen. If you have $100,000 they want a million dollars’ worth. That’s never going to change. You start with your wish list and then try to figure out how to make it work,” he says.
“Ultimately, you want to be a participant and a collaborator so finding a way of deciphering what the director says is really important — and some of them you really have to decipher.”
Schaefer’s warning when it comes to working with VFX will be familiar to many DPs. It’s all about who has control and authorship of the final image on screen.
“In my experience, sometimes VFX are not really ready to collaborate that much with you. It’s almost like, you’ve done your job photographing, now it’s our turn. What can happen when you get to the DI is that the front plates don’t match your original photography,” he explains.
“It’s a problem especially for many young DoPs who don’t know how to work with VFX at a time when the work is getting more complex. The bigger budget shows will often have five or more VFX vendors onboard. The key is to have a dialogue at the beginning in pre-production. You want to get talking with them to understand their process and for them to understand yours.”
Successful interpretation of a director’s vision is also about relationship management and how persistent the DP is in getting their point of view across.
“The director might tell you what they want and the DoP will go ahead and do it but then realize that the director really doesn’t have a clue visually. This is rare, but it happens, so you have to be prepared to take that burden on and even challenge the director. You’d be diplomatic saying something like: ‘I think this is what is best to bring your ideas across’; and ‘let me show you what we can do,’ but you might have to be persistent. Some young DoPs are afraid to question the director too much,” says Schaefer.
“Of course, it is the director’s movie in the end, especially if they are also the writer, but if you’re hired as the DP you have an agreement with them. It’s like a contract and you want them to stand by it and honor your side of the contract, which is essentially a respect for your craft in shaping the color and the composition and the light and the story.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
To Schaefer, the most interesting part of the job is the creation of images from scratch, but he believes there is a risk that this will be superseded by GenAI.
“There are already so many ways to affect the image, not only in the camera, but afterwards in any number of software programs. Even FilmLight’s grading system Baselight and DaVinci Resolve have incredible tools where you can make the image look as if it were shot with an anamorphic lens. You can shape the aberrations, make distortions, add flares. There are so many things that can be done to the image after you’ve done your job — and you don’t even know they’re going to do it,” he says.
“I’ve seen editors change the color in the Avid and then present to the director as if that’s what it should be. As cinematographers we have to be aware of all of these things and try to keep them under our control if we’re to guarantee that the final image is what you had discussed with the director.”
He adds, “AI is definitely something that we have to be really wary of. With AI moving so fast I’m not going to predict how many years it will be before we lose control but it’s looking like it’s going to happen eventually.”
The Spice Splice: “Dune: Part 2” Editor Joe Walker, ACE on Worldbuilding and Workflows
TL;DR
For editor Joe Walker, ACE, “Dune: Part 2” was a delicate juggling act to balance the intimate love story with the spectacular VFX.
Walker talks about layering sound to create scenes such as the gladiatorial battle of the minds featuring Austin Butler’s villain.
Dropping clues to the simplicity at the heart of the storytelling, Walker compares parts of the film to James Bond movies and animated Road Runner cartoons by Chuck Jones.
The love story between Paul Atreides (Timothée Chalamet) and Chani (Zendaya) is at the heart of Dune: Part Two. The same set of filmmakers, led by Denis Villeneuve and including cinematographer Greig Fraser, composer Hans Zimmer and editor Joe Walker, ACE, continues to tell the complex saga with confidence and boldness.
“Denis once said to me, this film should be less like Lawrence of Arabia and more like Chuck Jones’ Road Runner,” Walker noted to Steve Hullfish in the Art of the Cut podcast. “And in fact, if you look at my cutting room, I’ve got a picture of Road Runner. It’s on the wall.”
Walker is speaking at the 2024 NAB Show at “American Cinema Editors Present: The Cut from Rough to Art,” on Sunday, April 14 at 2:00 PM. Moderated by ACE president Kevin Tent, this interactive workshop will explore the craft of editing, career paths, working with the director, and the editorial storytelling process.
In a film that covers so much narrative ground, flashing forwards and backwards, it’s remarkable how little exposition there is.
“What I think we did well in Part One was disguise how much setting up there was for this second film,” Walker told Daron James at the Motion Picture Association’s The Credits. “It meant many of the characters had limited screen time in the first film, but with Part Two, they are given a little bit more space for the drama to unfold.”
Walker won the Academy Award for the first film, which ended with Paul meeting Chani and going into hiding with the desert people, the Fremen.
“Their relationship was the most important thing to get right in the film. We are leaning into action adventure and dazzling sequences, but if the heart isn’t in the right place, then it’s not going to work. We spent a lot of time taking care of that relationship,” he says.
“It’s interesting to see how Paul must transform from a young adult who, when we first meet him, is a guy dreaming about a girl who doesn’t want to practice with his mother at the breakfast table. But through the course of it, he becomes, first of all, a man, and then this superpower in a way.”
Indiewire’s Bill Desowitz describes the transformation of Frank Herbert’s classic sci-fi novel as “a high-octane Lawrence of Arabia in space: an epic love story and political adventure.” It’s the action that takes center stage in Dune: Part Two, requiring a faster pace, more compression of time and less exposition over the two hours and 46 minutes of runtime.
“We wanted the film to shift as nimbly as possible between the ‘bignormous’ and the intimate, while still devoting time to craft Paul and Chani’s relationship,” Walker told Desowitz.
“In terms of our editing process,” he said, “the most significant focus was on scene positioning. Editing a big ensemble film is like making an Alexander Calder mobile. Lean too heavily on one aspect and the whole thing tilts; spread the storylines too evenly and it’ll damage the impact of the design. It’s a delicate juggling act.”
Villeneuve conceived of Part One as “the appetizer” with the sequel being the main course, “where everything’s set up, and you can just enjoy a damn good action-adventure story,” Walker explained to Hullfish.
“That’s not to say I don’t appreciate great dialogue, which I really do,” he said. “We spend hours and hours, finessing, improving, trying to get the best clarity. Generally, in this story, clarity is a hyper narrative. Dune is a vast ensemble piece with a complex story and complex backgrounds and Frank Herbert’s almost fractal approach to storytelling [so] we had to have utter clarity and delivery of ideas.”
The scene, which introduces the film’s villain, Feyd-Rautha (Austin Butler), is a standout and not just because it is shot in stark black and white using infrared photography. It’s a gladiatorial fight in a huge arena that is both a display of spectacular power and ambition and a duel between the minds of Baron Harkonnen (Stellan Skarsgård) and his ruthless nephew.
“What was fun cutting that sequence was creating a world not just visually, but in sound terms, something that doesn’t sound like a 21st Century sports event but has its own unique flavor,” Walker told The Credits. “We spent a long time developing layers upon layers of different Harkonnen sounds.”
These included sounds based on the Māori haka chant from New Zealand. The sound team gave Walker stems of all sorts of audio that he could layer into his timeline.
“In the middle of the timeline was the pre-mixed dialogue and spot dialogue,” he told Hullfish. “They recorded, at some point, a huge group of heavy metal rockers to kind of amplify the psychotic nature of the crowd.”
The audio was so complex he would turn it off at times and cut the visuals first. “I’ve got 36 sound channels, all staggered and overlaid. If you are adjusting one image, you’re adjusting 36 cuts. So the most efficient way to adjust that kind of sequence is to turn sound off, cut the sequence and then work on the sound to kind of knit it back together again. Sometimes I just want to feel the rhythm of things in my head in silence, and then you can kind of complement it with the sound.”
The film’s most euphoric sequence is Paul’s triumphant ride of a colossal sandworm. It’s a scene that has been 40 years in the making since Villeneuve first drew storyboards of it as a teenager and required three months on location.
“The very first thing I saw of Dune: Part Two was previs for that scene,” Walker tells IBC365. “It was meticulously worked out and shot by a dedicated unit under the command of producer and second unit director Tanya Lapointe but in the cutting room it was like a jigsaw.
“Denis described the effect he wanted as being a kid on the back of a school bus, the axle bumping,” Walker continues. “There was to be the sense of there being no purchase on the worm. You can’t just lie there because it will throw you off. Then, Denis said, ‘it was like being on a skyscraper — a skyscraper turning.’
“When he used those words, Chris [Christos Voutsinas, additional editor] and I dug into our archive of sounds for girders grinding and massive ships moving to match with the huge strength of this worm coursing through the sand.”
The first cut of the sequence was so cacophonous that it dissipated the overall impact. Then they began to deconstruct it. “When everything is noise, the music’s pounding and people are screaming, there is no shape,” Walker says. “So, we turned off the music in the first part of the scene. As Paul sees the worm and begins to run towards it, we just play sound FX to build the anticipation and anxiety of this unpredictable unstoppable beast.”
To emphasize the major story point of this scene, we hear Dune’s signature tune. “It’s our Bond theme,” Walker says. “We’ve deliberately starved the film of that particular piece of music until the point that Paul stands up on the worm. There is something religious about that moment.”
Cinematographer Greig Fraser used infrared cameras to shoot the black-and-white sequence in director Denis Villeneuve’s “Dune: Part Two.”
Posted April 2, 2024
The “True Detective” Editors Want You on the Case
TL;DR
Editors Mags Arnold, Brenna Rangott and Matt Chessé discuss the making of “True Detective: Night Country,” directed and written by showrunner Issa López and starring Jodie Foster and Kali Reis.
Speaking to Matt Feury on “The Rough Cut,” the editors discuss how “clumping” is better than cross-cutting, and how they leave the audience “breadcrumb” clues so viewers can almost, but not quite, solve the puzzle at home.
One chief task was to unpack a story-within-a-story told over time — the backstory between the two lead detectives, using flashbacks and audio/video of Matthew Broderick’s Ferris Bueller singing “Twist and Shout.”
A clue to the popularity of HBO anthology series True Detective is that viewers themselves enjoy turning sleuth. In the latest hit season, Night Country, directed and written by showrunner Issa López, the task was to solve the mysterious disappearance, and even more mysterious discovery, of eight men from a research station in the Alaskan ice. The law enforcement officers on the case are Detectives Liz Danvers (Jodie Foster) and Evangeline Navarro (Kali Reis). Mags Arnold, Brenna Rangott (who previously worked with López on the 2019 series Britannia) and Matt Chessé, ACE all served as editors of the season.
A chief task was to unpack a story-within-a-story told over time — the backstory between Danvers and Navarro using flashbacks and audio/video of Matthew Broderick’s Ferris Bueller singing “Twist and Shout.”
“Some flashbacks organically made their way into the edit as the series progressed,” Rangott told The Rough Cut’s Matt Feury. “Those were places where we felt like it was helpful to nudge the audience. But as far as ‘Twist and Shout,’ that was all very much in the script. It wasn’t an afterthought but there were moments where we needed to add more of it to give the audience an idea of its link to everyone’s past traumas.”
She adds, “There’s a sort of cross-pollination of traumas coming into reality and ‘Twist and Shout’ is a good example of it. We wanted to hit on it to give the audience an idea that Danvers associates that song with her past trauma. There were a few more moments where we added it where it wasn’t scripted.”
The drama reaches its pivotal moment at the end of Episode 6. For Chessé this was “the control point that told us how much we needed, how much we knew, and when we knew it. So, we reverse-engineered it from Episode 6,” he said.
“It was a cool process. People would slip stuff under the door, and I’d have to craft it into my episode. I think that’s the way it has to be on something like this. It’s like you’re having a dialog throughout the show with these elements and you have to work it where it happens.”
The trio could access a pool of shared shots to help assemble their individual episodes.
“Sometimes you need a shot that acts like a palate cleanser,” says Chessé. “But it’s got to resonate with whoever you’re leaving or who you’re going to. And they would double up sometimes. The assistants had to go through the episodes and spot, ‘We’ve seen that snowman too many times. We can’t repeat ourselves.’”
Fans of the franchise enjoy playing detective, so the filmmakers pepper the series with clues and red herrings, giving them enough to solve the puzzle but not so much as to be confusing.
“We’re not beating things over the head, leaving things for people to interpret and talk about after the episode,” Chessé says. “I think Issa had a great sense of playing with that. She knew what to hang on to, what to hold back, what to pay off, and what to let go. It seems like she had great taste because everybody loved the show. I didn’t have people afterward asking me to explain things to them. I think the takeaways were solid.”
On reading the script, Arnold cites a scene between Danvers and Navarro in Episode 1 where they talk about something that’s happened in the past. “What they said made me think that there was some sort of hinterland and that they’d had a prior relationship. I thought, ‘Not only is this Jodie Foster, True Detective, and Issa López, but now Jodie is going to play a gay person as well.’ I was super excited. Of course, it didn’t go that way. I didn’t misunderstand, but it could be interpreted either way.”
To help them assemble all the story elements in an order that ultimately made sense and kept audiences engaged, they employed a process they call “clumping.”
Arnold explains, “It’s when, instead of cross-cutting between characters, you go to two different characters playing their scene and then you come back, which is how it was written, just to keep it exciting. What we started doing was clumping things together so that you could get a sense of being with these characters a little bit more.”
Chessé details this further, “A lot of times you watch a show that has multiple characters and there are certain things you’re more into than others. But you don’t want the audience to have that feeling of, ‘I don’t want to be with this person right now. I want to stay with that person.’ You want to feel like you’re cutting away at a point that seems organic in terms of interest level and comprehension of the story,” he says.
“You’re on a journey collecting little clues so you’re going through the episode thinking, ‘I should remember that. That seemed important.’ You have to lay that breadcrumb trail out for people in terms of their interests and allegiances to the characters so that they’re bonding with this entire town that they have to meet. What is their relationship? We don’t say it overtly. You have to have those ‘reveal’ scenes close enough together that you can track them. If you put them too far apart then you’ll forget how everything’s connected.
“That’s the strength of being the editor, getting to be the first responder. You’re selling it to yourself first. So, whether the director feels a certain way or not, you can use that as an excuse to say, ‘As the audience member, I feel like I need to know this now. Can we try this over here?’ You have to make it make sense to you before it can make sense to other people.”
For those following along at home, that word again is “clumping.”
“I’m working on having it trademarked,” Chessé remarks to Feury, “Are we doing a clinic on clumping at the next ACE gathering?”
“Visually, the show remains loyal to its source material, impressively recreating the stunning landscapes and dynamic action that initially captured fans’ hearts,” says Karama Horne in her review of Netflix’s Avatar: The Last Airbender, a live-action adaptation of the immensely popular animated Nickelodeon series.
“The top-notch production design brings settings like Ozai’s eternally flaming throne room and the windswept vistas soaring over Aang’s sky-bison Appa’s fur to life [see above]. Some characters, like the Kyoshi Warriors, appear lifted straight from the iconic frames of the cartoon.”
The series debuted at number one and was almost immediately renewed by the streamer for two more seasons.
Executive producer/VFX supervisor Jabbar Raisani and VFX supervisor Marion Spates oversaw an enormous amount of effects work for the otherworldly series and made extensive use of senior colorist/finishing artist Siggy Ferstl of Company 3 to complete some of this work as part of the color grading process, rather than going through the standard VFX protocol of pulling each shot and sending it out to a dedicated effects house to do the work.
This method, which considerably expanded what could traditionally be accomplished in color grading sessions, was possible because of Ferstl’s dedication to using the tools within his color corrector of choice, Blackmagic Design’s DaVinci Resolve, and because of that manufacturer’s pronounced efforts to constantly expand the toolsets within Resolve. Raisani, Spates and Ferstl will speak about this unusual workflow during the discussion, “Avatar: The Last Airbender – Expanding the Role of the Colorist into VFX Artist.” Moderated by colorist, image scientist and educator Cullen Kelly, the session is part of the Core Education Collection: Create Series, and will be held at 4:00 PM on Monday, April 15 in rooms W216 and W217.
The VFX supervisors and Ferstl will explain how this workflow came about, inspired in part by some shots Ferstl had done with Raisani on Netflix’s hit series Lost in Space; how the workflow provided the series’ creatives a significantly more immediate and interactive way of iterating many of the visual effects shots, sometimes saving days in the process; and how the combination of VFX and final color grading helped to facilitate discussion and bring a unified feel to some of the complex imagery that could normally involve days of back-and-forth notes.
Ferstl will also drill down into some of the most challenging aspects of this type of work, which he accomplished with finishing editor Mike DeLegal (sharing media and timelines within Resolve); which of Resolve’s plethora of relatively new tools they used; and how his extensive preparation for this project let him hit the ground running as scenes started coming in to Company 3. Effects he created, either wholly or in part, included altering foliage and surroundings to create certain fantasy environments, building and integrating digital lighting effects, creating digital “lens” and diffusion characteristics over the finished image, and enhancing a number of key time-travel-related transitions in the series.
Additionally, panelists will share their thoughts about where this expansion of the color grading process into aspects of VFX creation could be of creative value for more shows in the future, and discuss what types of skills current and prospective colorists should consider developing in order to be prepared.
The one-hour event, complete with illustrative scenes from the series and a special breakdown reel prepared by Ferstl, promises to be fascinating and informative for anyone currently working in the fields of color, VFX, overall post or production and direction of VFX-heavy shows, as well as those who hope to be soon.
While assembling “Dune: Part 2,” editor Joe Walker, ACE referenced both James Bond movies and animated Road Runner cartoons by Chuck Jones.
Posted April 2, 2024
Navigating the Future of Post-Production: AI, Efficiency, Economics, and the Science of Saying “No”
TL;DR
Post-production veteran Scott Simmons examines two of the biggest trends the industry currently faces: artificial intelligence and the ongoing challenge of delivering high-quality work amid shrinking budgets and tighter deadlines.
AI, Simmons says, has the potential to enhance job performance rather than replace jobs, urging professionals to view it as a tool for improvement in the face of industry evolution. He also advocates for the necessity of setting realistic expectations and sometimes saying “no” to protect quality and well-being.
He also critiques the industry-wide shift toward subscription pricing models: “While monthly pricing is great for the company, there are some tools that just shouldn’t ask the customer for a monthly fee.”
At NAB Show, Simmons will lead nearly a dozen workshops focused on editing at POST|PRODUCTION WORLD, including remote workflows and collaborative post-production.
The post industry stands at a crossroads as the relentless push for more efficient workflows presents both unprecedented opportunities and daunting challenges. But as longtime industry pro Scott Simmons notes, it’s essential to recognize that the path forward is paved with both innovation and introspection.
As a seasoned freelance editor, Simmons has worked on a wide range of multi-cam, broadcast TV and music video projects over the course of his career, which spans more than 25 years. He has a featured role at POST|PRODUCTION WORLD at the upcoming NAB Show, leading nearly a dozen workshops focused on editing, including tips for remote workflows and collaborative post-production.
Simmons casts a spotlight on AI’s burgeoning role in post, naming it as one of the two biggest industry trends.
“Everyone is wondering what the long-term ramifications will be as AI penetrates deeper in the post-production industry,” he says. “While software tools and cloud-based services are pushing AI more into all areas of post-production, everyone is wondering if and when AI will take jobs. But the correct way to look at AI now is: how can AI help me do my job better?”
Another enduring trend, according to Simmons, is the pressure of decreasing budgets coupled with tighter delivery timelines. This scenario is all too familiar to post-production professionals who strive to produce their best work under increasingly stringent constraints.
“Delivering quality work becomes more difficult as budgets and timelines decrease,” Simmons observes. The industry’s relentless demand for faster, cheaper output challenges not only the quality of the work but also the well-being of those who create it. It’s here that Simmons advocates for a crucial shift in mindset: the courage to say “no.”
“Everyone wants to do their best work, but at some point, you just have to say no, for your own sanity’s sake. Often people asking for miracles have never been told, ‘no, miracles can’t happen in the manner you have asked.’ Perhaps that is something they need to hear?”
Continuous learning and discovery is key to navigating a rapidly evolving post industry, Simmons advises. This ethos of lifelong learning is not just about keeping up with technological advancements; it’s about fostering a culture of curiosity and adaptability that enriches our professional and personal lives. “Here’s a little secret: Even veteran P|PW instructors like myself learn new things at Post|Production World,” he shares.
“It might be an interaction with attendees after a class or a discussion during a session that prompts some new thinking or a new discovery. But even better than that is, as an instructor, sitting in on a full class from one of my colleagues for a topic that I know nothing about. You’re never too old or too busy to learn something new, and that is what P|PW is all about.”
Simmons encourages NAB Show attendees to explore the show floor with an open mind, seeking out both established and emerging technologies. “Post-production happens in ‘big apps,’ so first, the attendees should seek out those companies on the show floor and thank them for being there,” he suggests.
But the exploration shouldn’t stop at familiar tools; Simmons urges professionals to investigate competing products and engage with smaller vendors. “Everyone has their favorite tools that you’d have to pry from their cold, dead hands, but all of those tools most likely have competitors. Go find those competing tools you might have never seen or just dismissed and give them a look. Kindly ask questions of the vendors. Take the time to learn a little about these products that you might not encounter outside of a show like NAB Show,” he continues.
“That’s where you just might find your next game-changing product or service.”
One of the critical discussions Simmons advocates for involves the industry’s shift toward subscription pricing models. “While monthly pricing is great for the company, there are some tools that just shouldn’t ask the customer for a monthly fee,” he says. “With both inflation and the ever-increasing cost of streaming services people are tired and frustrated when yet another tool moves to a subscription model. Let’s make that one of our main topics of conversation at NAB Show.”
How Should FAST Channels Navigate Content Congestion?
The free ad-supported television (FAST) landscape has become increasingly crowded, and creators, publishers and broadcasters need to do more than simply make their programming available to viewers. That’s the contention of Nielsen.
“Still in its early days, the FAST ecosystem is playing catch up with respect to metadata — even basic genre information,” the measurement organization reports. “This, in and of itself, will limit monetization opportunities, as brands and agencies are unlikely to advertise against content without knowing the program genre.”
According to Gracenote Video Data, there are more than 1,900 individual FAST channels for audiences to choose from, with more than 1,300 in the U.S. alone. For context, there were just 1,000 in the U.S. mid-way through 2023.
That’s more than 21% growth in eight months. And what’s more, individual FAST platforms, such as Pluto TV, Amazon’s Freevee and Tubi, typically have hundreds of channels within them, providing ample choice within individual electronic program guides.
To make FAST channels profitable going forward will require data enrichment strategies. These will build on the basics of imagery and descriptions by providing a more complete picture of available content. Additions like mood, theme, scenario and setting “provide a new layer of information that can elevate the appeal of programming among audiences looking for something to watch.”
“The importance of normalized and comprehensive metadata can’t be over-emphasized,” Nielsen stresses. “Enriched metadata can help ensure that content stands out as FAST experiences build out enhanced search and discovery algorithms for recommendation engines.”
NAB EVP/CTO Sam Matheny discussed the importance of consistent measurement practices with Nielsen’s Paul LeFort.
April 4, 2024
Posted April 1, 2024
Doors of Perception: How Consumers View the Possibilities for AR/MR
TL;DR
Amdocs’ “The Era of Mixed Reality Report” offers insights into user behaviors shaping spatial computing technology.
Consumers trust Apple when it comes to mixed reality: more than half say an association with Apple would make them more interested in an AR/MR headset.
Nearly 70% of consumers have limited understanding of AR and MR, and nearly a third don’t have a clue what it is.
New research by Amdocs reveals that consumers are eager for more advanced augmented reality (AR) and mixed reality (MR) experiences, particularly if Apple is involved.
“The Era of Mixed Reality” report from Amdocs found that users are looking for these immersive technologies to enjoy gaming experiences first and foremost, with shopping and exercise not far behind. However, there is also a significant knowledge gap when it comes to AR and MR.
“Just like the iPhone turned the internet into mobile, mixed reality will disrupt how we interact with our surroundings and immerse ourselves in new experiences,” said Gil Rosen, CMO of Amdocs.
“But first, we’ll need to ensure networks, connected devices and the content that runs on top work together seamlessly. With immersive experiences, there is zero tolerance for lag or quality degradation, and to make it amazing, the entire ecosystem needs to evolve as one.”
A staggering 67% of consumers have limited understanding of AR and MR. Half of consumers (49%) haven’t used it at all in the past three months. While a third (33%) are aware that AR can be used to gain digital insights while shopping or traveling, nearly a third of consumers haven’t a clue what AR or MR is.
Those who do are keen to use Apple products or associate Apple with AR/MR experiences, Amdocs reports. More than half (52%) of consumers felt an association with Apple on AR and MR would make them more interested in it, and 38% said they would be likely to buy an Apple product.
Sixty percent of consumers, according to the report, would prefer a mixed reality approach over a full VR metaverse (preferred by 40%). More than half of users would be interested in trying the new technology, depending on cost.
Anthony Goonetilleke, group president of technology and head of strategy at Amdocs, added, “These findings uncover several essential factors, first and foremost being the need for better education around what’s possible from AR and MR experiences, as well as preparing networks that can better support in-demand, intensive and seamless experiences. As AR and MR experiences become more widespread, we’ll see the rise of new-found ‘experience bundles’ that capitalize on specific personas, coupling connectivity with entertainment, education, enterprise and more.”
Spatial computing has been adopted by Apple to describe its latest “wearable,” Vision Pro. But there are those wondering if this isn’t the metaverse by another name.
March 31, 2024
Research: Peak TV Has… Peaked. So Here’s What’s Coming Next.
TL;DR
2024 represents the dawn of an uncertain new era for the TV business. After many cycles of seemingly limitless growth, an unmistakable decline has begun, and where it goes from here is anybody’s guess.
Prestige drama output has contracted, with multiple factors behind its demise. But what comes next? Variety’s VIP+ analysis offers some clues.
Post-Peak TV streamers are expected to lean more heavily on international content, take fewer risks, focus on sports and unscripted shows, and release episodes weekly in a return to network TV format and scheduling.
Peak TV — born circa 2013, dead by 2023, after 10 glorious years — is disinterred and examined in a new report, which also predicts what Hollywood and viewers can expect of the next phase of home entertainment.
In a nutshell: the rise and demise of Peak TV (roughly the period from the debut of House of Cards to the finale of Succession) was the result of a number of factors, not least an oversaturation of prestige drama and a content spend that jumped from $139 billion in 2014 to $243 billion by 2022.
One thing we can be sure of going forward: fewer premium series, more content like the network cable TV of old.
2022 was the year that TV finally peaked. Luminate data shows 2023 saw a drop in original series output across all major streaming platforms, plunging from more than 2,000 titles in 2022 to just over 1,600 last year (a 20% drop year over year) — following two decades of almost continuous growth.
Arguably, as Aquilina outlines, the boom in TV began in the early 2000s, when cable networks realized they could make themselves more valuable to viewers, and therefore gain bigger audiences, by creating slates of original programming that viewers wouldn’t be able to find anywhere else. Think The Sopranos, Sex and the City and Mad Men.
But 2013 was when Netflix entered the equation, bringing originals like House of Cards and Orange is the New Black.
“This was a key tipping point in the history of TV because 2013 was the first time the FCC measured an annual drop in US pay TV subscribers,” said Aquilina. “In other words, that’s when cord cutting really got going and first started being whispered about in Hollywood circles.”
Skipping forward to COVID-era 2021 and 2022, the amount of original content on streaming “just balloons” — but early 2022 also marks the first quarter in which Netflix began losing subscribers as macroeconomic conditions bit.
“It’s enough to make everybody freak out and pretty much changed the calculus around streaming overnight. All of a sudden, Wall Street investors are paying a lot more attention to how much money these companies are spending and how much they’re losing on the streaming platforms.”
Studios like Warner Bros. were pouring billions of dollars into original content in order to compete with Netflix, racking up massive deficits in the process. This led to cuts in spending, and that meant fewer TV shows.
VIP+ data shows that 2023 was the first year since 2013 when the number of original shows released on SVOD fell — from 1,000 in 2022 to less than 800 last year.
But that doesn’t quite paint a full picture. VIP+ and Luminate expect aggregate spend on TV content in Hollywood to go up this year. Only by 2% or $4 billion to around $247 billion, but a rise nonetheless, with Disney expected to be the biggest spender at around $33 billion.
“More significant than just how much these companies are spending is where this money is going. Because companies have shown they’re still willing to shell out for the right programming, which at this point mainly means sports broadcasts.”
Analysts and industry observers expect spending on general entertainment TV to stay flat over the next few years, meaning less money for TV and, consequently, fewer shows being produced.
“The industry is headed for a contraction; there’s just not going to be the level of output that we had over the past decade,” said Aquilina. Factor in that time spent by younger audiences on TikTok and other social media has a detrimental impact on time spent with streaming TV.
So how is TV going to change in this post-Peak TV environment? Apart from fewer originals there are likely to be a lot more shows based on popular IPs (like HBO hit The Last of Us) — shows that cater to existing fan bases.
“In other words, TV is going to look a lot more like current film studio slates, with a lot of IP-based blockbuster content. There’ll be a handful of prestige awards-bait titles thrown in there to keep the Emmys coming in.”
Shows like Mad Men, Atlanta or The Bear will have a much tougher time getting greenlit in this environment, he suggests.
The days of a large number of expensive-to-produce or creatively risky shows are likely over. “You can get more for your dollar with unscripted content like reality shows,” he said, citing the Netflix series Love Is Blind.
In tandem with that, expect more reruns of existing content — a trend that already supports a lot of streamed video consumption (think Friends, Grey’s Anatomy, The Big Bang Theory).
There’s going to be a greater reliance on international, non-English-language TV, for one major reason: it’s cheaper.
“These shows can be produced for less money internationally than shows typically cost in the US. They can be acquired for much less than you’d spend to create a new drama or comedy series shot here.”
Korean drama is the logical place to look for a hit, but VIP+ points to shows emanating from Sub-Saharan Africa and India. That’s because, with the European and US markets pretty saturated, major streamers like Netflix are targeting growth in other territories and are doing so by investing in local content.
“I wager that an Indian series may very well become the next Squid Game,” Aquilina said. “I think that in the next few years an Indian series really becomes the next big breakout international hit.”
How else will post-Peak TV change? The rise of AVOD and FAST, plus new provisions in the new writers and actors guild contracts, will see ratings reported from streamers who were previously black boxes when it came to exposing how their content fared.
That might be taken as a plus, at least for media companies with broadcast divisions. VIP+ highlights other silver linings too, although they are fairly thin.
For example, if Peak TV was defined by supply far exceeding demand — too many shows being produced and more options than anybody needed — the correction “could actually really benefit the health of the business in the long run,” Aquilina contends.
Because of that, consumers may have an easier time finding something to watch if there are fewer new options coming out every single week.
There may be fewer prestige dramas and comedies being produced, but this could benefit everybody in the industry. That’s because the post-Peak TV landscape will look a lot like the network TV of old: shows with broader appeal, shows with less extravagant budgets, shows with longer seasons, and shows with episodes released weekly.
“This is going to necessitate a less artisanal approach than the one many signature shows of the Peak TV era took, and it’s going to mean a return to the broader sensibilities that defined network television. The new era of TV may result in far fewer classic shows, far fewer experiments and less daring shows, but there is a chance it could put Hollywood on a path to getting its business model back on track.”
March 31, 2024
Why the Next Business Model for Hollywood May Be Web3, Blockchain and the Metaverse (Again)
TL;DR
Seth Shapiro, a two-time Emmy Award winner and an advisor at the intersection of media, technology and fintech, wants Media & Entertainment to reconsider Web3, blockchain and the metaverse.
Hollywood is in the middle of a sea change, Shapiro argues, driven by these newer modalities with the continuous evolution of audience behaviors and the economics of a post-strike Hollywood.
“People are already living more of their lives online,” notes Shapiro. “Disintermediation is real and it’s coming for entertainment.”
Remember Web3? Or blockchain? Or the metaverse? If you think that was just a fad and a phase, even a scam, then think again. The next generation of the internet is being built on these technologies, and Hollywood can either ride the wave or drown.
“At one point, the M&E complex and the technology complex were roughly equivalent in market power and negotiating strengths,” says Seth Shapiro, a two-time Emmy Award winner and an advisor at the intersection of media, technology and fintech.
“Maybe, at the beginning, entertainment even had a stronger lead. But from a market cap perspective Big Tech has completely eclipsed the studios. I think among the M&E community there’s a sense of ‘how did that happen and what can we do to keep it from happening again?’
“The idea of this session is to talk about some of the ways that the NAB audience can leverage new technology to level the playing field and even gain a higher share of the spoils from whatever comes next.”
Shapiro is asking Media & Entertainment to think again about Web3, blockchain and the metaverse. Hollywood is in the middle of a sea change, he argues, driven by these newer modalities with the continuous evolution of audience behaviors and the economics of a post-strike Hollywood.
The first generation of the internet, which began in the 2000s, was greeted with widespread skepticism by studios (among other industries) that people would ever want to — or be able to — transact online. Clearly, e-commerce was a gamechanger, which should have given M&E a better chance to reset the business model with the arrival of Web 2.0. But it did not.
“During this period, technology companies like Google and Facebook supplied short-term cash to media companies for IP and then applied that IP to help grow their audience,” Shapiro says. “As a consequence, their market cap and their audience multiplied logarithmically. While studios gained short-term capital, they were in fact helping grow the long-term enterprise value of Big Tech companies.”
You would think that M&E would be twice bitten, twice shy as it heads into the next era of Web3. And yet…
“A lot of people view Web3 as just a garbage term,” Shapiro says. “They’ve all heard ‘metaverse’ and ‘blockchain’ and NFT so many times and they’re just inured to it. They think it’s just more hype. Well, some of it is hype and some of it is scam, but the absolute truth is that that was exactly the same at the beginning of Web1 and Web2.
“There were just as many scammers trying to rip off investors or consumers in Web1 and Web2 as with Web3, but the core technologies at the foundation of Web3 are inevitable. Future generations are clearly going to spend more time in the metaverse, in virtual worlds and gaming environments,” he continues.
“The fact is people are already living more of their lives online. The digitization of currency and direct transactions between consumers and whoever they want to buy from whether it’s an influencer, creator or a studio is happening. Disintermediation is real and it’s coming for entertainment.”
Taylor Swift shopped her Eras Tour concert film to the studios, but the studio machine would only agree to release it in 2025. Wanting the film to be available as soon as possible for fans, Swift bypassed the studios and took the film straight to theaters.
Another classic example is the deal George Lucas struck with Fox for Star Wars in 1977. While the studio landed the film rights, the far greater piece of business was in the merchandise.
“Fox didn’t realize where the real value was but that’s what’s really interesting. So let’s talk about and look at the ways we need to change and think about IP to more effectively capture where the real upside is going to be. It is changing now and the models are changing fast.”
Then there’s AI. While generative AI, digital ownership, and the creator economy change the face of media and update the flow of revenue, how do writers, producers, and technologists set themselves up for the next wave?
“Together with blockchain infrastructure and virtual worlds AI will interlock to create the next great series of economic opportunities for tech companies and for media companies if they grasp what they need to do,” says Shapiro.
“The source of the IP will wind up in a much more direct relationship with the buyer of the product. That’s where studios have a huge opportunity. IP creators have a huge opportunity or risk being left behind,” he predicts.
It’s not as if M&E didn’t see it coming. The major record labels buried their head in the sand when new digital streaming technology enabled artists to connect directly with music fans.
“The triumvirate of blockchain, metaverse and AI is going to take other people out of the M&E equation the same way,” Shapiro says.
“The major record labels were essentially in the trucking and manufacturing business. Their real leverage was the fact that they could print your records and get them into the store. That’s all gone. Those pieces of the business no longer exist, but in the process there’ll be new opportunities.”
That’s the optimistic note. In Shapiro’s view the root cause of many of the most successful businesses and business models of our time, including big tech, is IP.
“More often than not the traditional media business was involved in some sense in creating quality IP. But Big Tech has so far been adept at taking that IP and amplifying it in such a way that they acquire millions of users. Studios might get the license fee, which is what they asked for, but they didn’t get any of the upside in terms of all value generated from that transaction,” he says.
“Those who have the best ability to market and monetize IP will win. That means Hollywood, producers and M&E executives should make a real effort to learn the new tools and the new ecosystems that are coming so that they can more effectively monetize their content.”
March 31, 2024
Quality AI Models Don’t Have to Include Controversial or Copyrighted Content, Right? Right?
Since the beginning of the generative AI boom, there has been a fight over how large AI models are trained. In one camp sit tech companies such as OpenAI, which has claimed it is “impossible” to train AI without hoovering up the internet’s copyrighted data. In the other camp are artists who argue that AI companies have taken their IP without consent or compensation.
Adobe’s approach is unusual for a major tech company in that it has tied its future to building generative AI products without scraping copyrighted data from the internet.
In an exclusive interview with MIT Technology Review, Adobe’s AI leaders are adamant this is the only way forward.
At stake is not just the livelihood of creators, they say, but our whole information ecosystem. What they have learned shows that building responsible tech doesn’t have to come at the cost of doing business.
“We worry that the industry, Silicon Valley in particular, does not pause to ask the ‘how’ or the ‘why.’ Just because you can build something doesn’t mean you should build it without consideration of the impact that you’re creating,” says David Wadhwani, president of Adobe’s digital media business.
Adobe says it wants to reap the benefits of generative AI while still compensating people for the work that has gone into training data. It also argues that by not indiscriminately scraping the Web there’s less risk of bias and misinformation being output. Adobe’s model has never seen a picture of Joe Biden or Donald Trump, for example, “and it cannot be coaxed into generating political misinformation.” It has not been trained on any copyrighted material, such as images of Mickey Mouse.
On the other hand, tech companies like OpenAI argue that they are able to build more powerful AI models for the benefit of everyone, including artists. The misuse or mistakes in public GenAI tools can be used to improve the model, they maintain.
While Big Tech is the prevailing model for AI use, are Adobe and other supporters of content verification initiatives winning the argument?
March 31, 2024
Cinema Audiences Want That Engagement and Emotion. Here’s How to Rethink Release Strategies.
TL;DR
The movie theater business has now entered the true digital age. That can and should mean more content supply, better rates, and more flexibility for target audiences, says Jackie Brenneman, founding partner of cinema industry consultancy The Fithian Group.
The reason so much discussion surrounds Taylor Swift’s concert film, Brenneman states, is that the marketing strategy was so effective.
Since almost all movies exist to allow viewers an emotional experience, a movie theater is the best place to experience that catharsis of emotional release, she emphasizes.
Audiences flocked back to theaters for movies like Barbie and Oppenheimer, and recent research from Omdia predicts that theatrical releases will generate close to $50 billion globally this year.
The panel, moderated by Carolyn Giardina, senior entertainment technology & crafts editor at Variety, will also feature Laurel Canyon Live president John Ross and independent director Sam Wrench.
“It’s still a very active and viable distribution platform for content owners and a place to experience something unique for consumers,” Brenneman says. “We just need to get a lot smarter at marketing it.”
“It’s not that music is specifically the future of cinema, but that the future of cinema is increased diversity of content appealing to all audiences,” says Brenneman.
It just so happens that Taylor Swift’s Eras tour concert film is tailor-made as a case study.
“It’s about marketing and awareness. So, the big differentiator and the reason we’re having this discussion surrounding Taylor Swift, was that the marketing strategy was so effective. In a single tweet she was able to alert hundreds of millions of people to her new movie, at a time when fans either couldn’t afford tickets or found her tour sold out.
“She used cinema as a means to go direct to fans. And it was just perfect timing,” says Brenneman.
“Of course, not everyone has such a great connection with fans as Taylor Swift but there are a lot of lessons to be learned. It showed that there is a way to really tap into fan desire and to do so affordably and effectively.
“With Barbie and Oppenheimer, the same idea applied. Fan-driven attendance and awareness is really key. It meant that marketing is both the challenge and the opportunity,” Brenneman elaborates.
It’s also important to understand that the movie theater industry has undergone a seismic change in the last couple of years.
For virtually the entire history of the movie theater business, there has been a large fixed cost in getting any piece of content into a theater: it would cost a thousand dollars or more to make and deliver a movie reel to a theater.
Studios had to be selective about which theaters they picked so as not to play too many competing films in certain markets. With the switch to digital, the industry introduced the virtual print fee (VPF), through which studios subsidized the cost of transitioning to digital projection equipment. Now those VPFs have expired.
“To all intents and purposes, they are gone so all of a sudden, we are in a true digital age of cinema,” Brenneman says. “That can and should mean more supply, better rates, and more flexibility to target audiences.
“We are just at the dawn of this. What the Eras concert showed is that you can make an offer to the entire market and allow exhibitors greater control.”
Prior to forming The Fithian Group, Brenneman was EVP and General Counsel to the National Association of Theatre Owners. In those roles, she was a frequent speaker and panelist at global exhibition events on the importance of data and optimism in shaping the narrative and future of the industry.
Recent research has shown how important social media (especially influencers and creators) is in driving awareness of new film and TV shows. While this should be part of the marketing mix, a reliance on social could exclude large parts of the population.
“We risk getting into this vicious cycle where we think only certain types of people go to the movies, because those are the only people that we are actually speaking to,” Brenneman says.
“When you break down the demographics of big blockbusters, it’s clear older audiences are coming in the same proportion that they were coming before the pandemic.”
Movie theaters, she will argue, have such a special place in the community. “Not all communities exist online. We know how to start to reach online communities but what about real communities of real people? A lot of influencing is done in the real world. Movie theaters are right there in the hearts of their communities. They can actually speak to real communities, offer promotions to local schools, or local senior centers, charities, and community groups.”
“If you’re able to tap into your local in-person influencer groups and market to them you can find the groups that would be most likely to see something like Taylor Swift or the Metropolitan Opera or whatever narrative feature or alternative content you are showing.”
A lot of recent discussion about the future of cinema has pitted it against streaming, which she says she never understood. “Streaming is an in-home option and far more of a competitor to cable, but it got into consumers’ minds that home viewing was a replacement for cinema. Which is bizarre, because when people were going to Blockbuster every weekend, renting three movies and still going to the theaters, no one thought to question its viability.”
Brenneman continues, “Post-COVID, cinema has come back stronger. It’s clear they are not going to kill one another. Even though consumers had their streaming habits entrenched while they were stuck at home, when movie theaters re-opened they started coming back in record numbers.”
That has to be because going to the movies is different from watching a movie at home — a completely different experience. This is not only because movies played on IMAX-style large-format screens, with properly calibrated surround sound, are in fact a better and different experience than watching at home; there is also neurobiology research to back it up.
“Human beings didn’t evolve to actually regulate and feel and process emotions alone. We look to others to validate our response,” Brenneman says. “We don’t know how to feel when we are alone. We don’t laugh all by ourselves. We need to read the room and learn how to feel. The human emotional experience is a shared experience not an individual experience. We didn’t evolve that way.”
“Since almost all movies exist to allow you to have an emotional experience a movie theatre is the best place to experience that catharsis of emotional release,” asserts Brenneman.
“In addition, there’s still no cheaper way to get out of your house and do something chill, than going to the movies. People can talk about price all they want, but find me a cheaper, chill alternative to take a bunch of friends or family out of your house. There’s nothing better.”
March 31, 2024
Creator Economy Amplified: The New Studio Frontier
Watch “Creator Economy Amplified: The New Studio Frontier.”
TL;DR
The rapid evolution of production technologies is making advanced studio tools and techniques accessible to creators at all levels, enhancing the quality and impact of content across Media & Entertainment.
Video production wizard Ryan Grams from Studio Upgrade joins Jim Louderback, editor & publisher of “Inside the Creator Economy,” and veteran journalist Robin Raskin to share insights into building versatile, efficient, and technologically advanced studios that cater to a range of creative needs.
Backed by his “start with what you have” ethos, Grams founded Studio Upgrade to address the burgeoning need for professional-grade virtual communication during the pandemic, providing online production courses alongside everything from pre-built kits to bespoke high-end studios.
Louderback and Raskin point to the proliferation of virtual production tools in our phones and on social media platforms that are enabling solo creators to produce high-quality content from anywhere, breaking down traditional barriers in the creator economy.
The rapid evolution of production technologies is revolutionizing the creator economy, democratizing content creation by making advanced studio tools and techniques accessible to creators at all levels. This transformation is not only enhancing the quality and impact of content but also expanding the creative possibilities for storytellers, helping to shape the future of Media & Entertainment.
As part of NAB Amplify’s “Creator Economy Amplified” series, video production wizard Ryan Grams, CEO & Founder of Studio Upgrade, sat down with Jim Louderback, editor and publisher of Inside the Creator Economy, and veteran journalist Robin Raskin, founder and CEO of Virtual Events Group, to share their insights into building versatile, efficient, and technologically advanced studios that cater to a range of creative needs. From immersive environments to flexible, home-based setups, discover how the new studio frontier is expanding the horizons for creators everywhere. Watch the full discussion in the video at the top of the page.
The conversation offers a preview of the all-new Creator Lab at NAB Show, a dynamic space dedicated to exploring the newest trends and technologies driving the creator economy. Led by Louderback and Raskin, the Creator Lab will host an extensive lineup of discussions and interactive workshops, with industry experts including Grams offering valuable insights and practical skills to attendees.
The Transformation of Content Creation
For Grams, who has more than two decades of experience creating compelling content for powerhouse brands like Google, UPS and Walmart, the onset of the pandemic marked a pivotal moment.
“A big thing that I noticed, of course, was all the virtual meetings that we were having,” he says, describing his background up until then serving as a shooter, editor and producer.
“Quite frankly,” he shares, “the way that I was able to keep putting food on the table was to begin helping professionals and leaders and executives level up the way that they were showing up in their virtual content.”
Backed by his “start with what you have” ethos, Grams founded Studio Upgrade to address the burgeoning need for professional-grade virtual communication during the pandemic, providing a wealth of online production courses alongside everything from pre-built kits to bespoke high-end studios.
The traditional model of showing up to a location with heavy equipment gave way to a more democratized approach, where creators could build their own studios with guidance from experts like Grams. “Instead of what used to be me showing up to some place with a bunch of cameras and lights, I was teaching people how to start their own — sometimes small, sometimes large — personal studios, and that’s now become my livelihood,” he explained.
Cost-Effective Studio Technologies for Solo Creators
Grams’ transition to teaching others how to build virtual studio setups mirrors a wider shift in the creator economy, highlighting how accessible virtual production technologies are enabling creators to produce high-quality content from anywhere. This change is breaking down traditional barriers, making it easier for more creators to share their stories and connect with audiences globally.
Whether enhancing virtual meetings or creator-led productions, “there’s just such great technology out there,” says Louderback. “The ability to put up a green screen and do compositing and make it look really good, with a much better camera, and just a slightly faster computer, can really add so much to what you do.”
Expectations around video quality and audio clarity have risen, Raskin notes, a trend accelerated by the pandemic. “It was enough that the dog could dance… and we were just happy seeing postage-sized stamps of people on a video call,” she recounts. “We kind of all fell into this together, I think, and came out learning simple techniques about lighting.”
Raskin doesn’t come from a production background, she says, but her experience during the pandemic helped her build a toolkit — an external camera and mic combined with an OBS camera system — that she now carries wherever she goes. “The stakes have gone higher, so much faster,” she comments, “and video has become something of a lingua franca in how we communicate.”
Grams stresses the foundational importance of high-quality audio. “The most important thing to start with is, of course, a better microphone,” he advises, suggesting that even modest investments in audio equipment can drastically improve content quality.
“Buy a $30, $40, $50 microphone, whether it’s a little clip-on lavalier or a podcaster-style microphone,” he urges. But while having a good mic is a great first step, he notes, it still needs to be properly employed. “Having it three feet away is not going to sound good. You gotta have it closer to you. It’s going to make such a world of a difference.”
Quality lighting has also become a lot more accessible to creators, Grams points out. “The whole world has changed with what you can do now with LED lighting,” he says, “and you don’t have to spend thousands of dollars, you can do it for just one or two hundred.”
The Rise of Virtual Production Tools
Virtual production has moved well beyond the pie-in-the-sky realm of The Mandalorian, Grams, Louderback and Raskin agree, with an expanding toolkit now available to creators at all levels.
“If you use TikTok…that’s basically virtual production,” Louderback points out, explaining how the platform’s built-in filters allow users to employ a variety of backgrounds for their content, including news stories. “It’s amazing,” he says of how widespread this type of virtual production technology has become. “And the creators using it on TikTok are doing amazing things with it.”
Our phones, says Raskin, have a range of built-in effects such as focus pullers and color correction that should be explored. “It’s worth thinking of your phone as a camera, and really immersing yourself in that first, and then thinking about a path that you want to grow,” she counsels. “I’m using an external camera now, not a webcam, because I find that picture clearer and just more vivid. And so you will start to see when a tool gets limited and move on to the next one.”
Exclusive Insights: Get editorial roundups of the cutting-edge content that matters most.
Behind-the-Scenes Access: Peek behind the curtain with in-depth Q&As featuring industry experts and thought leaders.
Unparalleled Access: NAB Amplify is your digital hub for technology, trends, and insights unavailable anywhere else.
Join a community of professionals who are as passionate about the future of film, television, and digital storytelling as you are. Subscribe to The Angle today!
Grams shares his enthusiasm for multi-camera setups and AI-generated backgrounds, which have revolutionized the way content can be produced. “It’s changed the way that video can be produced, which I think it’s just so much fun,” he says, highlighting the creative possibilities these tools unlock.
“For me, having a wide shot and a tight shot is a really fun and interesting way to add a level of production to streams that I’m doing myself, and just being able to switch between them with some intention,” he details.
Using an overhead camera for product demos to enable picture-within-picture formats adds even more value, says Grams. Real-time workflows can also streamline collaboration and reduce the need for editing. “Treating it like a live production, even if you’re not streaming somewhere, to be able to edit in real time at the push of a button allows you to be more efficient.”
The Future of Studio Spaces
Studio spaces are evolving as the creator economy continues to expand. To keep up with trends, says Louderback, “follow some of the top creators who are out there, who are doing it, take a look at what they’re doing, and take their advice.”
Raskin highlights the rise of pop-up studios, which offer creators the flexibility to produce high-quality content without the need for significant upfront investment. “You are seeing studios pop up everywhere,” she observes, pointing to a trend that supports the growth of the creator economy by making professional-grade facilities more accessible.
As an example of the versatility of pop-up studios, Grams recounts a recent Willy Wonka parody created for an event using a pop-up facility in St. Paul, Minnesota. “It’s not quite a giant 360 at all, but it’s still a massive screen, and for a much, much more affordable price,” he says.
The technology has completely transformed traditional video production, he says, describing how his crew was able to complete shooting 10 scenes with different backdrops in just eight hours. “We were able to do that, very, very quickly… using AI-generated backgrounds and making changes in real time,” he says.
“That would have just been unheard of [before]. For the size project that it was, we were still able to accomplish a lot using that style of production. But even scaled way down, it’s still changing the way that video can be produced, which I think is just so much fun.”
Going beyond the gear, the most important thing to learn, says Grams, is how to show up and get comfortable, both behind and in front of the camera. “You can do that with your phone and no lighting,” he says. “You can start practicing with whatever you have. Don’t use not having the right gear as an excuse to stop you from starting.”
Posted March 28, 2024
Navigating the New Era of Social Commerce: Creator, Organic and Paid Content Strategies
TL;DR
The social commerce landscape has evolved from prioritizing follower counts to valuing the intrinsic quality of content, necessitating a strategic overhaul in social media approaches.
Dash Hudson’s 2024 social media trends report, “The Next Phase of Creator, Organic and Paid,” categorizes content into Creator, Organic, and Paid, each excelling in specific areas — Creator for engagement, Organic for community building, and Paid for extending reach.
Strategically boosting content, especially Reels for impressions and static posts for engagement, significantly enhances brand visibility and audience engagement.
The integration of generative AI across platforms like TikTok, Meta, and YouTube is transforming brand engagement strategies, offering personalized and interactive user experiences.
TikTok Shop is redefining social commerce, combining sales potential with social engagement in a democratized marketplace, underscored by its rapid growth as a major e-commerce player.
A seismic shift has occurred in the ever-evolving landscape of social media, moving us from a world where follower counts reign supreme to one where the content’s intrinsic value dictates its success. This transformation is a complete overhaul of how brands, creators, and marketers approach social media strategy.
Social media management platform Dash Hudson examines this shift in its 2024 social media trends report, “The Next Phase of Creator, Organic and Paid.” The report dissects current trends and offers a roadmap for leveraging the unique strengths of Creator, Organic and Paid content to forge a path to increased engagement and brand growth.
“In recent years, a new set of rules have emerged,” the report notes. “Short-form video has taken over, and there’s been a significant shift from socially-driven to content-driven feeds, as platforms deemphasize follower counts in favor of the popularity of posts.”
At the same time, audiences are becoming increasingly niche in the pursuit of their personal interests. This means that the traditional one-size-fits-all approach to content creation is fading into obsolescence, replaced by content that speaks directly to highly specific demographics, maximizing engagement and ROI in the process.
Creator, Organic and Paid Each Excel
The report divides content into three distinct pillars — Creator, Organic, and Paid — and explains how each pillar excels in its domain. Creator content, with its authentic voice, drives engagement; Organic content builds community through genuine interaction; and Paid, or boosted, content extends reach beyond traditional boundaries, ensuring that messages penetrate the noise of crowded feeds.
Creators reach niche audiences through existing community relationships while generating more engagement than both paid and organic content. On average, brands had seven creator partnerships in 2023, with most creators posting roughly eight pieces of content for each brand partner, the report found. But most importantly, content posted by creators generated 16 times more engagement than content posted by brands themselves.
Organic content provides a regular cadence of fresh content to build brand loyalty and maintain an engaged community, serving as a good indicator of what resonates with a given audience. Nearly 40% of organic content is static, while 23% consists of carousels and 38% of Reels, and brands tend to post an average of 11 pieces of organic content each week.
Paid, or boosted, content enables brands to get already high-performing content in front of highly targeted audiences. Boosted posts earn significantly more impressions than creator and organic content, highlighting their vital role in building brand awareness. Reels are the most common format of boosted content, at 50%, followed by Static (32%) and Carousel (18%). Around 9% of brands boost an average of one in every five posts, the report found. Some brands are even boosting up to 70% of their posts, but the brands that boost the highest percentage of content tend to have smaller followings.
Each of these pillars, Dash Hudson argues, “is even more impactful when working in concert with the others as part of a holistic social media strategy.”
This cross-pollination, the report finds, can drive meaningful results. “IAB tracked over 1,000 consumer purchase journeys, finding that advertising alongside creator content can accelerate the purchase funnel, showing a greater impact on building brand loyalty and a 1.3x greater impact on inspiring brand advocacy.”
A Synergistic Approach
More than ever, a siloed approach to content strategy is a recipe for mediocrity. The Dash Hudson report advocates for a synergistic model where “the interplay between Creator, Organic, and Paid content is not just encouraged but essential.” This holistic strategy amplifies brand presence, weaving a narrative that resonates across all platforms and demographics.
Dash Hudson conducted an analysis on the performance of creator, organic, and paid content across a variety of metrics to discern each content type’s strengths and their most effective roles within the content lifecycle.
Creator content shines in engaging niche audiences and boosting interaction rates, achieving a +34% higher engagement rate than organic content and a staggering +316% higher rate than paid content. This underscores its potency in fostering deep connections and stimulating audience participation.
Organic content, on the other hand, is pivotal in cultivating brand loyalty and nurturing a vibrant community. It outperforms paid content with +358% more comments and +104% more likes, and also surpasses creator content with +53% more comments and +18% more likes, highlighting its value in sustaining active and engaged user interactions.
Paid content is instrumental in expanding brand visibility, delivering three times more impressions and six times more video views than organic content, irrespective of the investment size. When compared to creator content, paid content generates seven times more impressions and video views, showcasing its unparalleled capacity to broaden reach and attract new audiences.
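For readers wondering how figures like “+34% higher” are derived, they are relative lifts between average per-format rates. The sketch below shows only the arithmetic; the sample engagement rates are invented for illustration (picked so the lifts land near the reported +34% and +316%) and are not Dash Hudson’s data.

```python
def lift_pct(rate_a: float, rate_b: float) -> float:
    """Relative lift of rate_a over rate_b, expressed as a percentage."""
    return (rate_a / rate_b - 1) * 100

# Hypothetical average engagement rates (engagements / impressions),
# chosen purely for illustration:
creator_rate = 0.0268
organic_rate = 0.0200
paid_rate = 0.00644

print(round(lift_pct(creator_rate, organic_rate)))  # ~ +34
print(round(lift_pct(creator_rate, paid_rate)))     # ~ +316
```

The same formula underlies every "+N%" comparison in the report: divide one average rate by the other, subtract one, and read the result as a percentage.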
The power of paid content cannot be overstated. The report highlights how “strategically boosting content — particularly Reels for impressions and static posts for engagement — can significantly elevate a brand’s visibility.” Moreover, “entertaining content, when boosted, sees exponential gains,” underscoring the importance of not just what you share, but how it captivates your audience.
Overall, boosting content builds brand awareness, “breaking through the algorithms to place your brand directly in front of the eyes that matter most.”
Boosting Reels grows impressions, and boosting static content grows engagement rate — “the metrics each format craves, tailored to maximize the inherent strengths of each content type.”
Boosting entertaining content drives much higher performance across the board, proving that “entertainment is not just king but the ace in the deck for social media strategy.”
Leveraging AI for Content Optimization
Dash Hudson outlines significant trends and developments in social media platforms themselves driven by new technologies and changing user behaviors. The report highlights three key shifts: the integration of generative AI into platform experiences, the resurgence of social commerce with a focus on TikTok, and the growth of direct messaging as a crucial engagement tool. It notes that 64% of marketers currently utilize AI, recognizing its value and planning continued investment.
Specific AI enhancements across platforms are transforming how brands engage with audiences, Dash Hudson finds. TikTok has introduced an AI-powered “Creative Assistant” designed to aid in campaign creation, complemented by custom AI Chatbot Creation tools developed by its owner, Bytedance.
Meanwhile, Meta is unveiling generative AI tools that facilitate video and photo creation/editing from text prompts, alongside expanding AI chat personas to include celebrities like Kendall Jenner and Tom Brady, and is exploring new chatbot creation tools. This is part of Meta’s broader strategy to integrate AI across its advertising products.
Instagram is in the process of testing generative AI features, including custom sticker creation and visual editing tools for uploaded content. YouTube has launched “Dream Track,” an experimental generative AI tool that enables users to create music in the style of various famous artists.
These advancements underscore a significant shift towards more interactive and personalized user experiences, offering brands novel ways to capture attention and stay ahead in the ever-evolving social media landscape.
TikTok Shop: Winner Takes All, and Anyone Can Win
In the dynamic realm of social commerce, TikTok Shop emerges as a groundbreaking platform, blending the immense potential for sales with the power of social engagement. The Dash Hudson report illuminates this platform as “a democratized marketplace where authenticity and entertainment are paramount for direct sales success.”
A standout revelation from the report is TikTok Shop’s meteoric rise in the e-commerce domain, “ranking as the 12th largest e-commerce retailer in the US market and the 5th largest in the UK market in 2023.” This rapid ascent highlights TikTok Shop’s substantial impact and its capability to captivate both users and brands, solidifying its position as a formidable force in e-commerce.
Insights into consumer behavior reveal how TikTok Shop’s unique integration of shoppable videos and livestreams significantly influences purchasing decisions. The platform’s innovative approach to social commerce sales, supported by detailed audience buying behaviors and sales metrics, offers brands a clear blueprint for engaging potential customers effectively.
Technological innovations within TikTok Shop, such as AI-driven personalization and AR features for virtual try-ons, are enhancing the shopping experience, making it more interactive and personalized. These advancements are pivotal in attracting and retaining users, offering them a seamless and engaging shopping journey.
Looking ahead, the report provides a glimpse into the future of TikTok Shop and the broader landscape of social commerce. As we navigate this new era of social media, the fusion of creativity, strategy, and technology becomes increasingly crucial. This report not only sheds light on the current trends and strategies but also offers a glimpse into the future of social media marketing — a future where engaging content, powered by sophisticated AI tools and platforms like TikTok, leads the way.
Posted March 28, 2024
AI Is Changing Advertising in All the Ways
TL;DR
The death of cookies presents challenges for the industry, but deep learning algorithms can still deliver strong performance by leveraging other signals and data.
Understanding the data you’re using to create or fine-tune AI models and processes is very important, so owning proprietary data is inevitably a huge advantage.
AI will drive inefficiencies out of the advertising process, from creative work to analysis of campaign performance, but the technologists say this won’t lead to a loss of jobs.
Brands and agencies are excited about the potential of AI to bring mass personalization to advertising campaigns, but will need the assistance of experts to help them make the most of their data.
“If you consider that large agencies, media companies and holding companies [own] a huge amount of proprietary data about markets, audiences and campaign history [it is] going to enable them to create very powerful AI models,” said Jamie Allan, director of business development, global agencies and advertising at chip maker NVIDIA.
After a year of pilots and tentative AI activity in the world of advertising, Allan said that in 2024 we will see agencies asking the question, “How do you connect the power of generative AI with the power of data?”
Speaking with Little Black Book, Allan said that 2024 is “the year of platform and production,” telling LBB’s Ben Conway that it’s “true table stakes” for every enterprise in the world.
“It needs to not be the product anymore,” said Allan. “AI isn’t ‘the thing’ — it’s how AI helps us create new things.”
WPP, Publicis, Dentsu, Omnicom and Media Monks are among agency groups investing in AI trained on centralized pools of data to build mass targeted ad strategies for the post-cookie era.
“The idea of personalization-at-scale, from content production, and using proprietary data to create privacy-first personalization that can bring an era of a more attractive, dynamic one-to-one advertising,” Allan said. “Many years of research have shown that it can drive better brand engagement and growth, and improve return on ad spend.”
It is in NVIDIA’s interest to be talking about this since its chips are being sold into agency groups to supercharge the crunching of data.
“Understanding the data you’re using to create or fine tune AI models and processes is very important, so having your own proprietary data is inevitably a huge advantage,” he said.
“The quicker the business models can be adapted to the impact of AI, the more successful companies will be as well,” added Allan. “That’s something agencies have the opportunity to help guide brands on, once they become experts in that business transformation.”
He insisted that GenAI is not about the generation of content, but the generation of intelligence. “The quality of that intelligence is based on the data, the sources and the teams building those models and pipelines,” he said.
“If you are generating and owning data, then you should own the intelligence that that data is going to produce as well. And you should have the capability to generate that intelligence.”
Marla Kaplowitz, CEO of agency advocacy group 4As, recently stated, “GenAI is here to stay, leaving the advertising industry with a stark choice: adapt or become irrelevant.”
As 4As SVP of creative technologies and innovation Jeremy Lockhorn wrote in Fast Company, “agencies must embrace the opportunity to transform their revenue model.”
Next in Media talked with Cognitiv CEO Jeremy Fain about what ad industry execs really need to understand about the difference between LLMs, deep learning and computer vision.
According to Fain, deep learning is a powerful tool in performance advertising, allowing for more efficient and effective targeting of impressions.
“Transparency and customization are key factors in successful media buying, and deep learning can provide insights and analytics to support these efforts,” Fain says.
His company applies AI, in the form of deep learning applications and technologies, to predict consumer behavior and self-drive full-funnel marketing performance at scale.
“If you rely on third-party cookies to message your customers, you could be missing out on 80% of the people you want to reach,” Fain said.
AI will drive inefficiencies out of the advertising process, from creative work to analysis of campaign performance, but the technologists say this won’t lead to a loss of jobs.
Allan said, “Jobs will be augmented and supercharged, especially in the creative side. The best in the industry are looking at these tools and setting out very flexible strategies about their creative pipelines — how they can integrate multiple tools and not be set in a single creative process.”
Fain said, “I think the roles will change but I don’t think that the number of people employed at agencies will necessarily materially change over the long term.”
A recent Goldman Sachs study suggests that, in the next 10 years, most jobs will be complemented by AI, not substituted by it.
“If we let it, and get it right, we can use generative AI to tell more compelling stories, connect with audiences on a deeper level, and usher in a new era of advertising that is both effective and meaningful,” said Lockhorn.
Posted March 27, 2024
Court Is in Session: The People vs. AI vs. Creativity
Get ready to witness the trial of the century as AI faces charges for Conspiracy to Conformity in Content Creation.
Court will convene Tuesday, April 16 at 2 p.m. at NAB Show in Las Vegas. (The session is open to everyone; register here with code AMP05 to attend for free.)
Brace yourselves as our silicon-based pal stands accused of masterminding the heist of creativity, leaving the digital realm drowning in a sea of yawn-inducing, cookie-cutter content!
Come watch as the prosecution attempts to pin the blame on our binary buddy for aiding and abetting creators in unleashing an epidemic of slapdash mediocrity.
The defense will no doubt counter with arguments that AI is just a virtual scapegoat innocently crunching numbers while humans take all the creative credit.
Can AI clear its cache and walk away with nothing but a circuitry slap on the wrist? Join us and hear our expert witnesses; their testimony will likely provide more twists and turns than a glitchy algorithm on a rollercoaster ride.
Generative AI uses very powerful machine learning methods such as deep learning and transfer learning on vast repositories of data to understand the relationships among those pieces of data — for instance, which words tend to follow other words. This allows generative AI to perform a broad range of tasks that can mimic cognition and reasoning.
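The idea that a model learns “which words tend to follow other words” can be illustrated with a deliberately tiny sketch: a bigram frequency table built from a toy corpus. Real generative models learn such relationships with neural networks trained on vast datasets; the corpus, function names and counting approach below are a hypothetical, minimal analogy, not how production systems actually work.

```python
from collections import Counter, defaultdict


def train_bigrams(text: str) -> dict:
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows


def predict_next(follows: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]


corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Scaled up from word-pair counts in one sentence to learned statistical relationships across billions of documents, this is the intuition behind a model "knowing" what plausibly comes next.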
One problem is that output from an AI tool can be very similar to copyright-protected materials. Leaving aside how generative models are trained, the challenge that widespread use of generative AI poses is how individuals and companies could be held liable when generative AI outputs infringe on copyright protections.
When Prompts Result in Copyright Violations
Researchers and journalists have raised the possibility that through selective prompting strategies, people can end up creating text, images or video that violates copyright law. Typically, generative AI tools output an image, text or video but do not provide any warning about potential infringement. This raises the question of how to ensure that users of generative AI tools do not unknowingly end up infringing copyright protection.
The legal argument advanced by generative AI companies is that AI trained on copyrighted works is not an infringement of copyright since these models are not copying the training data; rather, they are designed to learn the associations between the elements of writings and images, like words and pixels. AI companies, including Stability AI, maker of image generator Stable Diffusion, contend that an output image provided in response to a particular text prompt is not likely to be a close match for any specific image in the training data.
Establishing infringement requires detecting a close resemblance between expressive elements of a stylistically similar work and original expression in particular works by that artist. Researchers have shown that methods such as training data extraction attacks, which involve selective prompting strategies, and extractable memorization, which tricks generative AI systems into revealing training data, can recover individual training examples ranging from photographs of individuals to trademarked company logos.
Legal scholars have dubbed the challenge of building guardrails against copyright infringement into AI tools the “Snoopy problem.” The more a copyrighted work protects a likeness (for example, the cartoon character Snoopy), the more likely it is that a generative AI tool will copy it, compared with copying a specific image.
With respect to model training, AI researchers have suggested methods for making generative AI models unlearn copyrighted data. Some AI companies, such as Anthropic, have announced pledges not to use data produced by their customers to train advanced models such as Anthropic’s large language model Claude. Methods for AI safety such as red teaming — attempts to force AI tools to misbehave — or ensuring that the model training process reduces the similarity between the outputs of generative AI and copyrighted material may help as well.
Role for Regulation
Human creators know to decline requests to produce content that violates copyright. Can AI companies build similar guardrails into generative AI?
Given that naive users can’t be expected to learn and follow best practices to avoid infringing copyrighted material, there are roles for policymakers and regulation. It may take a combination of legal and regulatory guidelines to ensure best practices for copyright safety.
For example, companies that build generative AI models could use filtering or restrict model outputs to limit copyright infringement. Similarly, regulatory intervention may be necessary to ensure that builders of generative AI models build datasets and train models in ways that reduce the risk that the output of their products infringes creators’ copyrights.
Posted March 27, 2024
GenAI Is Good for Artists, So What’s the Problem?
Video created by sculptor/artist Alex Reben using OpenAI’s Sora
TL;DR
OpenAI executive Peter Deng explored the role of humans in the age of AI in a provocative talk at SXSW.
He defends the company’s accelerated release of AI tools and its ongoing development of AGI as being for the benefit of humanity.
Asked directly whether creators should be compensated for the use of their work as AI training data, Deng as good as says no.
OpenAI is voluble about its mission to deliver all the benefits of AI to humanity, but is non-committal at best on whether it should be paying creators for the work its machines are trained on.
Quizzed on this, Peter Deng, OpenAI’s VP of consumer product and head of ChatGPT, told SXSW, “I believe that artists need to be a part of that ecosystem, as much as possible. The exact mechanics I’m just not an expert in. But I also believe that if we can find a way to make that flywheel of creating art faster, I think we’ll have really helped the industry out a bit more.”
The implication: generative AI should be viewed as a definite plus for the creative community, which should be thankful that it speeds up the creative process and quit moaning about compensation.
Asked directly whether artists deserve compensation, Deng avoids a direct response.
“How would I feel if my art was used as inspiration [for an AI]? I don’t know,” he said. “I would have to ask more artists. I think that, in a sense, every artist has been inspired by some artists that have come before them. And I wonder how much of that will just be accelerated by AI.”
Nothing to see here then, creative community. Move along.
Deng’s main message in the provocative, hour-long moderated debate was that AI and humanity are going to “co-evolve,” so get used to it.
“I actually believe AI fundamentally makes us more human,” Deng declared. “It’s a really powerful tool, it unlocks the ability for us to go deeper and explore some of the things that we’re wondering about,” he said.
“Fundamentally, our minds are curious and what AI does is lets us go deeper and ask those questions.”
In his example, someone learning about Shakespeare might struggle to get past the language or understand the play’s context. But they could boost their appreciation of the text by quizzing an AI.
In a similar way Deng imagines everyone having a personal AI that they could interact with for any number of reasons such as bouncing around ideas, problem solving or answering questions.
In this sense, AI is an evolution of the printed encyclopedia, of Wikipedia, or of an internet search engine.
“We are shifting in our role from being the answers and the creators to more of the questioners and the curators,” he said. “But I don’t think it’s a bad thing. If you take a step back, what’s really interesting about AI is that it gives us this tool, this new primitive that we can start to build on top of.”
The calculator is another analogy. Instead of spending time doing arithmetic, we can now think about higher-level mathematical problems. Instead of spending time recalling every single fact, we have Google or databases where knowledge resides, allowing us to ask higher-level questions.
“The level of skill that humanity has just keeps on getting pushed up and up and up with every sort of big technology. Since AI is such a foundational technology we’re going to be able to push our skill level up and up.”
Kids, he suggests, could use AI to program, learning how to code even before they learn how to write.
You can’t really argue with this sort of vague and optimistic approach to AI. It’s Deng’s job, after all, to promote OpenAI’s development.
He goes on to talk about how the mission of the company inspired him to join it from his previous role at Meta. He claims to want to help create “safe” artificial general intelligence, or AGI, the next level of the technology that OpenAI is working on, and to “distribute the benefits to all of humanity.”
Deng said, “I’ve never seen a technology in my lifetime that’s this powerful, that has this much promise. Just to be a part of something that’s going to be so beneficial to humanity if we get it right. And I just want to not mess it up.”
However, interviewer Josh Constine, the former editor-at-large of TechCrunch and now a venture partner at early-stage VC firm SignalFire, is no fool. He asks Deng probing questions, such as whether bias in training datasets is a concern and what OpenAI is going to do about it.
Deng essentially says it’s up to the user to decide, seemingly absolving OpenAI of responsibility.
“My ideal is that AI can take on the shape of the values of each individual that’s using it. I don’t think it should be prescriptive in any such way.”
Constine tries to get Deng to agree that giving AI a standard set of ethical values must be a good thing for all of mankind: not just an AI which is super-intelligent, but one which is “empathetic.”
Deng ducks the topic with more platitudes. “The beautiful part of humanity is that different parts of the world have different cultures and different people have different values. So it’s not about my values that I want to instill; I would just hope that we’re able to find some way to take the world’s values and instill them.”
Later in the interview he gives this revised approach: “How do we find ways to instill the values that we have and [impart that] learning to AI so that AI can kind of be a part of our coevolution?”
Would Deng trust an AI to defend him were he theoretically in court?
“[If] I were ever to be falsely accused of a crime, I would absolutely like to have AI as a part of my legal team. One hundred percent.” AI would act as an assistant to the legal counsel, “just listening to the testimony and, in real time, cross-checking the facts and the timelines, being able to look at all the case law and the precedent, and to suggest a question to a human attorney. I think there’s absolutely human judgment involved. But that level of sort of super-power assistant is going to be really powerful.”
That said, Deng wouldn’t yet trust AI for everything. Just as with a car’s autonomous functions, it will take time to build up trust in the machine. A key part of the evolution for Deng and OpenAI is real-world learning. OpenAI argues that the reason it releases ChatGPT and other large language models into the world is to test, trial, adapt and improve them through constant iteration outside of a lab. Deng argues this makes the AI better for humans in the long run.
“I think that the path of how we get there, the repeated exposures and experiencing of it, is a huge part of the coevolution. We’re not developing AI and keeping it in the lab. We’re trying to make it generally accessible to other people, so that people can try it out and can gain that literacy, and can get a feeling for what this technology can do for you.”
Literacy, or education about how to use and work with AI and its potential threats, weaknesses and strengths, is, he says, very important. He advocates for education schemes that do this and says OpenAI and its investor Microsoft are already paying for some of these programs.
One way to ensure AI remains a tool for mass use and mass literacy is to make it free. Deng commits to the idea that a version of OpenAI will always be free.
“There should always be a free version. Absolutely. That’s part of our mission — to distribute the benefits to all of humanity. It just so happens that it costs a lot to serve right now.”
He says enterprise users are paying to use OpenAI tools at a price “commensurate with their use,” but some of that value is able to trickle down.
OpenAI wants to push the boundaries of the tech, “but also make sure that we’re developing it in a very safe way,” he claims. “And the way that we build product on the inside is very much a combination of multiple people with multiple different perspectives on what could be.”
Pushed on whether there is a threat from deepfakes and other AI-generated information in this election year, Deng agrees that it does matter. He points to OpenAI’s support of content credentialing initiatives like C2PA. But will this matter in the longer term? He is not so sure.
“In the future, I don’t know if people will care,” he said. “Walking down the street here in Austin, I’m not sure how much we care that a billboard ad was created using Photoshop or not. Or indeed what tools were used to create that content. I don’t know how people will care [about AI generated content] in future but I do know that if people will care, then it will be corrected for.”
In other words, let the market decide.
Having warmed his subject up with some easy lobs, Constine gets down to the meat of the questioning. Where does Deng stand on how fast AI development at OpenAI and elsewhere should proceed? Should it be slowed so that society, industry and regulatory guardrails can catch up with its implications?
“I’m somewhere in the middle. With any new technology, there’s going to be really positive use cases and there’s some things that we need to really consider. My personal viewpoint is the way that we actually figure out what those challenges are and how we actually solve them is to release at a responsible rate in a way that gives society a chance to absorb and make sure we have the right safeguards in place.”
He adds, “I don’t think that AI will be safely developed in a lab by itself without access to the outside world. Companies are not going to be able to learn how people want to use it, where all the good is, and also what are all the areas that we need to be very cautious about [without release in the wild].”
Constine probes: If an AI makes a mistake, who is responsible? Should that AI model be changed or pulled back? Should the engineer be held liable? Should the company?
Deng reiterates that releasing product is the best way of seeing the good and the bad.
“AI will make mistakes, but it’s important that we release it so that the mistakes that are made are ones where we’ve already baked in some of the mitigations [safety features]. That iterative deployment is my best bet of how we can kind of advance this technology safely.”
Jamil will address the importance of authenticity for content creators at 3 p.m. (PT) on Tuesday, April 16 during a conversation with Really Famous host Kara Mayer. You can register to attend for free with code AMP05.
Here, she answers questions from NAB Amplify’s Emily M. Reigart about showing up on a plethora of platforms, breaking through in the Creator Economy, the impact of AI on self-image and her work in podcasting.
Tell us about your social media philosophy. How do you approach creating content and balance the need to “show up” on so many different platforms?
I make and post content that I would want to watch. That way I create a community online of people who have similar interests to me. This makes it far less draining to be consistent. When you’re not being yourself, it is far more taxing.
Regarding showing up across all the platforms, I’m mostly able to use the same content, so it’s a low lift. These apps all have such different users and audiences that it hasn’t caused fatigue among my followers yet!
What advice would you give to someone who wants to work in the Creator Economy?
Find something sustainable and honest. People are sick of over-produced, empty content. This is constant work; it requires planning, thought and vulnerability. Do not fake it, or it will feel like swimming upstream all the time.
Also, ask your audience what they like and don’t like. Doing so has helped me grow my podcast and my online following, because it makes my page feel like a community, rather than a stage just for me. I thrive on honest feedback.
You’ve long been outspoken about the impact of edited photos and Facetuned social media. Now, with the popularization of generative AI tools, we’re facing new conversations around content authentication and personal authenticity. How are you thinking about this?
I’m just terrified of it all. Terrified that now people won’t know that they’re comparing themselves to literal AI digital perfection. This is a crisis beyond our understanding. It also opens the doors for so much hostility online: rumors, deepfake incriminating videos, revenge pornography. It’s a nightmare.
You have recorded more than 200 episodes of your podcast, “I Weigh.” What have you learned from working on this project consistently for four years?
I have learned that I have so much still to learn!!!
The arrogance of our current society when we look down on others who are openly learning is something that really concerns me. There are so many fascinating subjects and outlooks on this planet. We cannot possibly expect to know it all. Learning is so much fun and can be such a bonding communal experience.
This podcast has repeatedly blown my mind and expanded my horizons. I’m a more tolerant, humane, humble and thoughtful person for it.
Does your background as a presenter and actor impact how you function as a podcast host?
Absolutely, because my job is to zoom out and think about the show at large. Not just myself. I am constantly producing in my head while I’m interviewing. I’m carving a full picture of someone with my questions, which makes for hopefully a multi-dimensional and thorough conversation.
March 26, 2024
London Calling: The Complex Production on “Criminal Record”
TL;DR
French cinematographer Laurent Barès discusses the lure of a London shoot and the challenges of bringing a complex narrative to the small screen.
One challenge was to realistically portray working-class neighborhoods and depict a London off the beaten track.
Barès explains why he dislikes formulaic camera work, saying it feels like you’re not telling a story but “shopping for the edit.”
Like any good DP, Barès devised a visual grammar to fit the story, and shot the series on the ARRI Alexa Mini LF equipped with Zeiss Supreme FF lenses.
Peter Capaldi and Cush Jumbo star as detectives drawn together by an anonymous phone call to right an old miscarriage of justice in Apple TV+’s Criminal Record.
Written by BAFTA nominee Paul Rutman and directed by Jim Loach, the series touches on issues of race, institutional failure and the quest to find common ground in a polarized Britain.
“I loved the complexity of this narrative,” French DP Laurent Barès (Gangs of London) told British Cinematographer. “It’s a real challenge to convey this to the audience without them feeling lost. Too many shows today are simplistic, obvious. Life isn’t like that. Criminal Record is a good reflection of the complexity of our lives.”
The Frenchman says he loves London, and this helped him portray a different side of the city than the tourist cliches.
“There’s a significant character in Criminal Record that irresistibly attracted me – London. A multicultural, immense city. I love London. I’ve been fortunate to spend several months there because of my profession.”
During research, Barès discovered the work of British photographer Ray Knox, whose color photos of London seemed close to the universe of Criminal Record.
“He captured a modest London, far from tourist spots. The light guides his graphic composition. I also [draw on] photos from each of our location scouts. It was important to choose locations that, in some way, offered a perspective on the city.”
For instance, a lengthy discussion between Hegarty (Capaldi) and DS Cardwell (Shaun Dooley), is set in a bar with large windows. “Behind them, you constantly feel the hustle and bustle of the street, adding an extra dimension to their conversation.
“When DS Lenker (Jumbo) talks with a phone seller, Hasad (Sia Alipour), we moved his stand a meter onto the pavement. This way, for the two-shot, you can see the perspective of Kingsland Road.”
A related challenge of Criminal Record was to realistically portray working-class neighborhoods.
“I dislike miserablism,” Barès says. “We strived to maintain a balance between reality and poetry. I drew inspiration from Don McCullin’s photos of Liverpool in the late 1960s — beautiful, realistic, moving, and respectful. The framing is slightly distanced enough to understand where we are but not so much as to ignore the drama of those who live there.”
He shot the show on the ARRI Alexa Mini LF equipped with Zeiss Supreme FF lenses and, as any good DP will do, devised a visual grammar to fit the story.
“Filming an investigation is capturing a thought in motion,” he says. “In every investigation, there are advances, mistakes, setbacks, dead ends, and successes — all of which evoke camera movements. The approach shouldn’t be illustrative but attentive.”
He says he didn’t want a didactic approach to camera coverage, such as opening a scene with a wide shot and a forward tracking shot, then shot/reverse-shot during dialogue, and a few inserts for editing convenience.
“When you do that, it feels like you’re not telling a story but shopping for the edit. It’s not creative; it’s purely technical. Paul Rutman’s text deserved much better. It alternates action and investigation scenes with their consequences on the characters’ daily lives. There was no way to film them the same way.”
There is camera movement in introspective scenes — such as slow tracking shots accompanying the characters’ contemplation. This helps create an intimacy between the viewer and the characters.
“Filming this show required a lot of sensitivity. There’s no replicable model. Each actor, each scene is different.”
Barès follows the French filmmaking tradition in declaring a hatred for aesthetics for the sake of aesthetics. “Framing, composition only exist if they tell the story,” he declares. “This doesn’t exclude elegance and beauty, but there must be an alignment. Each project dictates its own aesthetics.
“I keep an eye on the second units. I don’t want a Terry Gilliam shot in the middle of a Michael Mann film, or vice versa. Each in its own style. What matters is the coherence from the first to the last shot.”
This consistency of image across the story applies to his work in the grade too. In this case, the colorist is Anthony Daniel (All Quiet on the Western Front).
He talks about his work on this project and approach to colorist collaboration in general during the Frame & Reference podcast, hosted by Kenny McMillan.
“Memories from the shoot help me explain what I want,” he said. “Weather conditions, the sun’s position and so on. I always remind my colorist of the shooting conditions. I don’t understand why sometimes DoPs are asked to work on grading remotely via video from their homes. Physical presence seems indispensable [to create the best work]. Thanks to my producers for respecting that.”
In the podcast, Barès discusses his experience attending a prestigious film school in France, highlights the challenges of entering the industry, including the need to keep learning, and expresses frustration with film students’ lack of attention to storytelling and photography.
March 26, 2024
The Fragmentation Situation: What Do Today’s Streaming Audiences Want?
TL;DR
SVOD price hikes may be approaching their peak: Consumers will likely balance costs and content with ad-supported tiers, contracts, and more bundles, but these may be short-term solutions for preserving streamers’ profitability.
Social media and the unbundling of pay TV have trained consumers to expect more customized and personalized content and advertising. Deloitte suggests these are levers to build greater consumer engagement and value.
The biggest challenge for SVOD providers and studios is that they are no longer addressing a mass culture, but a fragmented landscape of competing digital entertainment options.
Here is more evidence, if it were needed, that the video streaming model is shape-shifting under its own weight, forcing players to adopt a much more sophisticated approach to the market.
In its analysis from November 2023, Omdia found the number of SVOD services per home has declined in a number of markets for the first time. Market analyst Antenna in its latest “State of Subscriptions” report also finds that subscriber growth among Premium SVODs slowed last year to 10%. Deloitte in its “2024 Digital Media Trends” report found more than a third of Americans no longer think subscription VOD is worth the price they are paying.
On average, Deloitte says, US households spend $61 per month on streaming services. That’s a 27% increase over last year’s average of $48 per month. And streaming services might want to think twice before increasing prices further, as nearly half (48%) of the people Deloitte spoke with said they would cancel their streaming service — even their favorite one — if prices went up by $5 per month or more.
“With 36% of Americans surveyed believing content on SVOD isn’t worth the money, providers shouldn’t assume that advertising, bundles, and contracts are enough to help their business,” said Deloitte.
Its survey data shows that US consumers are questioning the value of streaming media while also declaring their unwillingness to ever pay for social media.
“This is a generational shift,” the report stated. With some eldest millennials in their 40s, “it’s no longer merely ‘younger generations’ who are giving their time equally to TV and movies, social media and user-generated content, and immersive and social gaming.”
Cancellations are already a problem for the industry, notes Chris Morris, analyzing the Deloitte report at Fast Company. Deloitte reports that 40% of consumers have cancelled a streaming service in the past six months.
Antenna found that churn had tripled in the last four years, pressuring net additions and growth overall. It also identified a category of “Serial Churners” — individuals who have three or more cancellations of a premium SVOD service in the past two years. That segment now comprises nearly a quarter of users.
Antenna attributes the overall increase in churn to the surge in mergers and acquisitions among the major streamers since 2019. Almost half of Premium SVOD Subscriptions (excluding Netflix) are in their first year of tenure, it notes.
On the plus side, 10% of cancellations resubscribe the next month, and one in three are back by six months after cancelling.
Antenna concludes that if the previous stage of the streamer business model was focused on acquisition to amass scale, the next stage necessitates a shift to managing their subscribers.
“This will translate to much more sophisticated marketing and product strategies, new success KPIs, and a whole lot more reliance on data,” says Antenna, which of course can deliver all of this.
Part of the problem is that viewers’ time is increasingly fragmented, drawn away from TV and streaming TV and toward social media sites and video games.
“The biggest challenge for SVOD providers and studios may be that they are no longer addressing a mass culture, but rather a fragmented landscape of competing digital entertainment options,” Deloitte execs state. “Trying to rebuild pay TV business models around streaming services could help reduce SVOD churn and slow attrition in the near term, but the long game for success will likely involve reinventing the medium to be more personalized, more shoppable, and more social.”
Providers will also likely need to widen their scope beyond TV and films to reach modern audiences, it suggests, and make their IP work across social and video games.
“The industry has had 20 years to understand the size and shape of the streaming disruption. Now they should come together to work to build something truly contemporary.”
This would include partnering with social media creators and influencers to facilitate “discovery, hype, and trust,” and using generative AI to improve the quality of content creation. However, Deloitte warns that this could also “lead to a flood of cheap and novel content that further dissolves the boundaries between ‘real’ and synthetic, commodity and premium.”
Simultaneously, free video stacking is still on the rise. Omdia charts YouTube’s continued growth as the top video service provider in key markets. Strong growth in other social video platforms and free ad-supported television (FAST) services makes free the major streaming strategy that all the big SVOD services are leaning into.
In Europe, meanwhile, the legacy of public service broadcasting remains strong, with traditional free TV and broadcaster video on demand (BVOD) services in high demand.
“The allure of social media platforms such as TikTok and Instagram Reels has reshaped how individuals consume video content,” says Omdia analyst Maria Rua Aguete. “The appetite for free content is ever-increasing and the major streamers are clearly leaning into this as a strategy. With engaging formats and vast user bases, social media services offer compelling alternatives to mainstream streaming services.”
Posted March 24, 2024
Don’t Treat AI Like Pandora’s Box, Warns Jaron Lanier
TL;DR
Treat AI like an “entity” that is already actively “intelligent” and you risk the world actually descending into “The Matrix,” says tech guru and Microsoft adviser Jaron Lanier.
Because we have large language models that seem to work in the same way that natural biological neurons do, we have erroneously assigned machine and human to the same category.
There’s no magic in the black box of LLMs, Lanier says. We are in charge and can shape AI any way we want.
If you believe Jaron Lanier, there’s no intelligence in our current AI but we should be scared nonetheless. The renowned computer scientist and virtual reality pioneer is a humanist and says he speaks his own mind even while on the Microsoft payroll.
“The way I interpret it is there’s no AI there. There’s no entity. From my perspective, the right way to think about the LLMs like ChatGPT is, as a collaboration between people. You take in what a bunch of people have done and you combine it in a new way, which is very good at finding correlations. What comes out is a collaboration of those people that is in many ways more useful than previous collaborations.”
Lanier was speaking with Brian Greene as part of “The Big Ideas” series, supported in part by the John Templeton Foundation. He argued that treating AI as “intelligent” gives it an agency it technically does not have while absolving us of our own responsibility to manage it.
“There’s no AI, there’s just the people collaborating in this new way,” he reiterated. “When I think about it that way, I find it much easier to come up with useful applications that will really help society.”
He acknowledges that anthropomorphizing AI is natural when confronted with something we can’t quite comprehend.
At present, because we have large language models that seem to work in the same way that natural biological neurons do, we have assigned both machine and human to the same category. Erroneously in Lanier’s view.
“Perceiving an entity is a matter of faith. If you want to believe your plant is talking to you, you can, you know. I’m not going to judge you. But this is similar to that.”
The risk of not treating AI as a human-driven tool is that the dystopian fiction of Terminator becomes a self-fulfilling prophecy.
“I have to really emphasize that it’s all about the people. It’s all about humans. And the right question to ask is: could humans use this stuff in such a way as to bring about a species-threatening calamity? And I think the clear answer is yes,” he says.
“Now, I should say that I think that’s also true of other technologies, and has been true for a while. The truth is that the better we get with technologies, the more responsible we have to be and the less we are beholden to fate,” he continues.
“The power to support a large population means the power to transform the Earth, which means the power to transform the climate, which means the responsibility to take charge for the climate when we didn’t before.
“And there’s no way out of that chain that [doesn’t] lead to greater responsibility.”
Ultimately, the way to prevent The Matrix from ever happening is to frame AI as human responsibility.
“The more we hypothesize that we’re creating aliens who will come and invade, the less we’re taking responsibility for our own stuff.”
Lanier adds, “There are plenty of individuals at Microsoft who wouldn’t accept everything I say. So this is just me. But at any rate, what I can say is that Microsoft and OpenAI and the broader community do serious work on guardrails to keep it from being terrible. That’s the reason why nothing terrible has happened so far in the first year and a half of AI.”
March 24, 2024
How XR and AI Can Deliver True Transmedia Storytelling
TL;DR
Rachel Joy Victor, co-founder of fbrc.ai, emphasizes the shift towards immersive, interactive narratives enabled by AI and XR technologies, offering new opportunities for audience engagement beyond traditional formats.
Drawing on a diverse academic and business background with a focus on computational neuroscience and “Spatial Economics,” Victor has designed multiplatform narratives, immersive experiences, tools and platforms for a broad range of household-name clients.
Victor highlights the role of AI in optimizing asset movement across platforms, with companies like ModTech using machine learning for asset optimization and Playbook XR enabling cross-format creation.
Rachel Joy Victor aims to explore how AI can revolutionize storytelling in the digital age, particularly in terms of content creation efficiency.
“Traditional formats will always have their place, but immersive storytelling offers unique opportunities for audience engagement,” she says. “We’re witnessing a shift towards interactive narratives and spatial experiences, where viewers have agency in shaping the story.”
Victor is a designer, strategist and worldbuilder, working with emergent technologies and mediums (XR, AI and Web3) to create cohesive narrative, brand, and product experiences. At NAB Show, she will be moderating a panel discussion, “Harnessing AI-Driven Storytelling For Efficiencies in Content Creation,” on Monday, April 15 at 4:00 PM in the Capitalize Zone Theater (W2149). The session, which includes Jean-Daniel LeRoy, co-founder and CEO at Playbook XR, Mod Tech Labs CEO Alex Porter, and Emmy-winning immersive director Michaela Ternasky-Holland, will focus on generative video workflows and procedural content creation.
She’ll also be conducting show floor tours at NAB Show focused on AI technologies and innovations. These tours will offer attendees a primer on the technical aspects of AI, emerging production workflows, and new content formats focused on the backbone of tooling for new content production pipelines (see below).
Victor draws on a diverse academic and business background with a focus on computational neuroscience and “Spatial Economics.” Her designs range from multiplatform narratives and immersive experiences to tools and platforms and spaces and cities for clients including Disney, HBO, Vans, Ford, Havas, Meow Wolf, Niantic, and more.
“I’ve always been passionate about understanding human behavior and how it interacts with technology,” she says. “Over the years, I’ve worked on various projects, from creative direction for events like the Dubai World Expo to consulting for major brands like Nike and Crocs. Now, as a co-founder of fbrc.ai, my focus is on developing AI-enabled tools for content production.”
She says AI plays a crucial role in optimizing asset movement across different platforms and points to the work of ModTech, a company that utilizes machine learning to optimize assets, ensuring they’re in the right place, at the right time, and in the right format.
“Additionally, tools like Playbook XR facilitate cross-format creation by embedding behaviors into spatial design engines, allowing for seamless adaptation across various mediums,” she says.
“We’re developing a vocabulary for immersive storytelling, leaning into immersion while keeping entry barriers low. For example, this session also welcomes the insight of artist Michaela Ternasky-Holland, who is pushing the boundaries of immersive storytelling, combining 2D and 3D elements to create captivating experiences.”
Spatial Economics is an increasingly important field which dovetails media with science and entails understanding how spatial factors influence decision-making. For Victor, this is about leveraging real-time spatial data to personalize experiences. “For example, using data from IoT devices at a theme park to guide visitors towards water stations based on their location and environmental conditions,” she says.
With the rise of XR headgear like Apple Vision Pro a new battleground is developing for advertising and data collection around the real estate and sensory signals of a wearer’s face, such as data collected from eye-tracking.
“XR devices offer unprecedented access to personal data, raising concerns about privacy and data ownership,” she says. “It’s crucial to establish robust data policies to protect individuals’ privacy while still enabling immersive experiences.”
A little further out, some commentators predict a merging of our own biology (our neural pathways) with AI-driven computers and experiences.
“It’s a complex topic,” she agrees. “While there’s potential for incredible advancements in brain-computer interfaces, we are also still grappling with fundamental challenges, such as capturing and interpreting neural signals accurately. The portrayal of brain-computer interfaces in the public imagination is often oversimplified. It’s essential to approach these developments cautiously and prioritize ethical considerations.”
One of contemporary art’s most innovative creators, Anadol considers generative AI as his artistic collaborator.
March 24, 2024
Legacy Media and Social Platforms Can Peacefully Coexist (Really!)
TL;DR
Fresh research from Deloitte highlights a growing trend among younger audiences favoring long-form social media content, challenging the notion that Gen Z prefers only short-form videos.
Key insights from Deloitte’s 18th digital media trends survey and “The Creator Economy in 3D” report reveal that over 40% of Gen Z audiences actually prefer content longer than 10 minutes, underscoring the importance of understanding audience preferences in content creation.
The research also emphasizes the significant impact of content creators on brand engagement, with three out of five consumers more likely to interact with brands endorsed by their favorite creators, highlighting the evolving dynamics of advertising and content consumption.
Fresh research from Deloitte shows that younger audiences are increasingly gravitating towards long-form social media content compared with older generations.
An NAB Show session, “A Return to Long-Form: The Convergence of Legacy Entertainment & Social Media,” led by Dennis Ortiz of Deloitte Consulting’s Technology, Media and Telecommunications (TMT) practice unpacks the research and highlights strategies broadcasters should deploy in order to capitalize on this trend. The session will take place at 3:00 PM on Monday, April 15 in room W219.
“The emergence of social platforms is driving a lot more competition for eyeballs but the crux of our debate at NAB is that legacy media and social media can coexist — provided the right strategy is in place,” says Ortiz, who leads Deloitte’s advertising, publishing and social media platforms practice in the US.
One difference between the two from the consumer’s point of view is that legacy media tends to be more immersive. “It’s more about the experience whereas social media is really viewed as convenient but also driving connection and community. So there are different reasons as to why consumers or viewers gravitate towards each.”
Legacy media can and should take advantage of these content consumption trends to effectively compete for the same viewers.
Deloitte will share pointers on this too. Drawing on its latest (18th) Digital Media Trends Survey published in April and the second edition of “The Creator Economy in 3D,” Ortiz shares insight into the intersections of legacy media, social platforms and content creators.
For example, its research suggests that more than 40% of Gen Zs actually prefer long-form over short-form content. “There’s this notion that Gen Z must love short-form because they are on Insta or TikTok but the reality is that they’re actually starting to prefer long-form content (defined as greater than 10 minutes).”
This dichotomy between short- and long-form content is not necessarily a competition. They coexist and serve different entertainment purposes in everyday consumers’ lives.
Ortiz will also highlight what these viewing patterns mean for advertisers and content providers with ad-led services. A key here is the role that creators play in driving connection to brands.
Deloitte research indicates that three out of five consumers are likely to engage with a brand or purchase from a brand if their favorite creator recommends it. That supports the idea of the strength of connection that viewers have with certain creators.
“Advertisers can target audiences with social and reach the mass audience with legacy media,” he says.
Ortiz will dive into this with speaking guests representing both legacy (long form) and social (short-form) and discuss the resulting convergence of legacy entertainment and social media.
His overall prognosis for traditional media is positive. “I don’t think legacy media is going to go away. It will coexist with social. For legacy media it comes down to changing the strategy by really understanding how consumers use various types of media. With that insight they can create cross-platform strategies. An example from our research is that 54 percent of consumers start watching new shows or movies due to recommendations that they get from social media. The relationship between social and legacy is symbiotic.”
Deloitte Principal Dennis Ortiz says understanding consumer habits and the underpinnings of the Creator Economy are crucial to brand success.
March 27, 2024
March 24, 2024
Don’t Put Profit Before Ethics, SMPTE’s Renard Jenkins Warns AI Developers
TL;DR
Renard T. Jenkins, president of the Society of Motion Picture and Television Engineers, is concerned that the proliferation and sophistication of large language models are being embedded with bias, unconscious or otherwise.
That said, bias is not inherently a bad thing, says Jenkins. Erasing misogyny from LLMs would be a good thing, for example.
He calls on companies to ethically source data, to employ a diverse group of decision makers and developers and to educate, educate, educate.
SMPTE president Renard T. Jenkins has flagged concerns about bias in development of AI tools and says diversity among decision makers is a practical means to prevent it.
“We should be fighting against bias in everything that we build and the best way I believe for us to do that is through inclusive innovation,” he told the Curious Refuge Podcast. “If your team looks like the world that you intend to serve, and to develop these tools for, then they’re going to notice when something is biased towards them or towards others.”
Jenkins expressed concern that the proliferation and sophistication of large language models are being embedded with bias, unconscious or otherwise.
He suggests that bias is not inherently a bad thing because certain forms of bias are there for our protection.
“As a child you learned very, very early not to put your hand on the stove because it’s hot. You develop a bias against putting your hand on hot things. That’s good. That helps protect us as human beings. So there is that innate ‘bias’ that is born in us to protect us. The problem is when that bias is led by fear and inaccurate understanding of individuals or cultures. That’s what leads to the bad side of things.”
That goes for AI as well. Fear or misunderstanding of others can actually make its way into the development of a tool, he said, and once it makes its way in, it’s very hard to get it out.
He advocates a system of “good bias” that is not going to be xenophobic, misogynistic, racist, or homophobic. “I want all systems to be that way,” he said. “But I also believe that it can’t go into hyper overdrive because then it’s going to harm us. That’s why we have to understand bias and we have to remove bad bias from these algorithms.”
Aside from inclusive innovation, removing bias requires “sourcing ethically, cleaning [data] and monitoring your data,” Jenkins said. “That’s how we get to the point where we can hopefully one day not have to worry about the bad bias because it’s sort of been wiped out. That’s my goal.”
The problem is, as moderator Caleb Ward points out, that the competing pressure to make money from AI products risks ethically sourced models being relegated behind the drive to monetize. Even new AI regulation in Europe or the US might not be sufficient to stop it.
“It’s an arms race right now,” Jenkins agreed. “There’s a lot of money being thrown around and that sometimes drives product that is not ready for primetime, without being fully vetted for what its impact will be. It’s not just about the tool in itself in the sense of helping the creative, it really is about the impact that it has on the user and on our society as a whole. That should be one of the primary things that all of these companies take into account when they’re doing this.”
Jenkins says he is saying to executives that there’s a way for them to continue to make “all of the wonderful financial gains that you’re making and for you to continue to create phenomenal tools, but there’s also a way for you to protect users.
“Because in truth, if you’re doing something that’s harming your users, that’s bad business. You know it’s bad business because over time you’re going to run out of users.”
Everybody in media and business generally from the C-Suite on down needs to be educated about AI risk and reward.
“The more that you’re educated about it the more that you’ll understand. When you see something that could actually go in the wrong direction then you have the responsibility to say ‘let’s slow this down’ and try and make sure that we’re helping,” he says.
“You’ve got to protect the people and truly you shouldn’t be creating anything that’s going to cause harm.”
A new SMPTE report explains how artificial intelligence and machine learning are being used for production, distribution and consumption.
March 24, 2024
FOX CTO Melody Hildebrandt: Why Broadcasters Need to Take the Lead With AI
TL;DR
As FOX’s Chief Technology Officer, Melody Hildebrandt is spearheading efforts to address the challenges posed by deepfakes and AI-generated misinformation.
Hildebrandt urges broadcasters to lead the technical conversation against Big Tech, advocating for a united front to leverage and manage AI technology responsibly within the industry.
FOX adopts an “AI optimistic” stance, she says, recognizing the potential for new capabilities and economic opportunities. However, this stance emphasizes the importance of publishers controlling the use of their intellectual property within AI models.
FOX has launched Verify, a project to cryptographically sign all its online content, ensuring its provenance and combating misinformation. This technology uses blockchain to provide a tamper-proof record of content authenticity.
FOX aims to build a coalition around the Verify protocol, encouraging collaboration over competition among publishers to set a unified technical standard that counters Big Tech’s influence and protects content integrity.
Melody Hildebrandt began her career designing war games for the Department of Defense. Almost two decades later, as chief technology officer at FOX, she is leading a major initiative to combat the threat of deepfakes and AI-generated misinformation.
It’s a fight that media needs to take to Big Tech, says Hildebrandt, and she wants broadcasters to unite, fight back and flourish in the new AI economy.
“It’s time for broadcasters to actually take the lead in the technical conversation by defining the core architecture about how our industry is going to run in the future.”
The media industry needs to take defensive and offensive positions to manage AI, she asserts.
“We are in the ‘AI optimistic’ camp. We are bullish on the new capabilities and economic opportunities. But we also believe publishers should control how their intellectual property is used and commercialized within AI models — whether that is LLM training or real-time Retrieval Augmented Generation (RAG) via chatbots.”
“In this age of AI-generated media abundance the content coming from trusted brands like ours is going to be more important than ever. Consumers are going to rely on brands like FOX to help them navigate the information space in front of them. But trust is going to be exploited in this new information space. So, to thrive in this new area there are certain guardrails to put in place.”
Hildebrandt is the company’s lead on Verify, a project that sets out the technical foundations for proving the provenance of media that FOX publishes. Development started a year ago and the tech is already out of the gate.
Every single piece of content that is published online by FOX News or FOX Sports or from any one of its local stations, is now cryptographically signed with Verify.
“The moment content goes online it gets simultaneously written on the blockchain and can be verified using the tool,” she says.
Prior to becoming CTO, Hildebrandt was president of FOX subsidiary Blockchain Creative Labs, leading the broadcaster’s information security program. Now she is running seven strategic AI projects for the company.
Her team approached the challenge as both problem and opportunity. On the one hand, FOX content is valuable and there is an opportunity to derive revenue by licensing it to LLMs. On the other, there’s a need to combat misinformation and mitigate the threat of reputational damage.
She cites a post by a Twitter (now known as X) user with more than a million followers that purported to be a repost of a FOX News story falsely stating that Saudi Arabia had entered the war in Israel.
“How are consumers going to navigate this information space and know that the content that purports to be from FOX is in fact from us? As we explore these technologies we have to make sure that we preserve our brand and don’t do things to undermine the trust that our consumers have with us.”
Bringing both business and consumer problems together resulted in Verify.
“We think the solution is fundamentally the same which is to create a cryptographic hash of an image on publication and store that on the chain. Users can then compare that to another image by a simple drag and drop and tell if they’re actually the same. We thought that was the right way to tackle the problem, at least on this version one of release.”
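The hash-on-publication scheme Hildebrandt describes can be illustrated with a minimal sketch. This is not FOX's Verify code: the function names `publish` and `verify` and the in-memory `ledger` dict are assumptions for illustration, and the real protocol writes records to a blockchain rather than a Python dictionary.

```python
import hashlib
import time

# Illustrative stand-in for the on-chain record. The real Verify
# protocol writes to a blockchain; a dict keyed by hash is enough
# to show the mechanics.
ledger = {}

def publish(content: bytes, source: str) -> str:
    """Hash content at publication time and record the digest."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[digest] = {"source": source, "published_at": time.time()}
    return digest

def verify(content: bytes):
    """Re-hash candidate content and look it up in the ledger.

    A match proves the bytes are identical to what the publisher
    signed; any manipulation changes the hash and the lookup fails.
    """
    return ledger.get(hashlib.sha256(content).hexdigest())

article = b"Breaking: example story body ..."
digest = publish(article, "FOX News")

assert verify(article) is not None              # authentic copy checks out
assert verify(article + b" tampered") is None   # altered copy does not
```

The same comparison underlies the drag-and-drop check described above: the tool simply re-hashes the supplied file and tests for a recorded match.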
Verify would seem to diverge from the C2PA, a similar content credentials initiative backed by Adobe, The New York Times, the BBC and OpenAI. In fact, FOX supports C2PA too.
The C2PA standard is part of the Verify manifest. The C2PA has worked to embed metadata in content straight from camera to the moment of publishing, tracking all the ways it has been transformed as well as labelling that for anyone to check. Verify essentially picks up from the moment of publication to offer a way for publishers to monetize their authenticated content.
“We believe an open source, publicly verifiable and legitimate method of content ingestion from trusted sources is the better, safer way for models to be trained and to reference published content,” Hildebrandt says. “This is what we hope is a productive first step, an open source starting point grounded in technical solutions.”
She says LLM developers like OpenAI have been receptive. Since LLMs need data to train their AI models “they recognize that they’re going to need to start paying for content. I think that’s where this is going and where standards are interesting.”
Hildebrandt continues, “A company like OpenAI would say they can’t interact with thousands of publications and do bespoke integrations with all of them. But we can offer Verify as a common standard that facilitates that interaction. OpenAI benefits. Thousands of publishers benefit.”
An example: The Des Moines Register could use Verify to license its content while LLM developers won’t have to create a bespoke technical integration in order to legally access, and pay for, the paper’s content.
She says this differs from the problem that C2PA is trying to solve. “The work of the C2PA is super important but doesn’t solve the business problem of how it applies to LLMs.”
Licensing Verified content offers FOX upsides in terms of “the business guardrails that we can impose and when we want to negotiate deals for our content with large language models.”
No company today has the ability to scour the internet or social platforms for every misuse of its content, let alone enforce take-downs, but Verify will be there for adjudication in cases of dispute.
“The most important thing is that we have now the proof. If there’s a downstream piece of content that we see has been manipulated, or used without license, then we actually have the evidence on the chain. It’s a tamper proof record. We can see exactly when we signed it, in the exact context. It’s a line in the sand. But can we scan the internet to find every misuse of our licensed content? I don’t think anyone has solved that yet.”
FOX is among media groups helping to educate politicians and legislators about the threats of AI and how media would like to see it regulated. “We’re part of that conversation and my observation is that the Hill would very much like to see a market driven solution for the business problem,” Hildebrandt says. “That’s one reason that we’re excited to put Verify out there.”
She makes the analogy of trying to move the industry from Napster to iTunes. “It’s a kind of Wild West in the beginning but the technologies are maturing to the extent that, with Verify, we can offer a way for users of AI to pay for content the licensed way.”
“As a community of broadcasters we need to think about licensing and protecting our content and helping consumers to navigate that information space.
“At FOX, we’re hoping to build a coalition to build on this protocol. We’re assembling an initial founding group of publishers who want to help shape the future of the protocol. They might want to input into features they want to see developed. Because Verify is open source anyone can contribute to the project and anyone can build their own extensions to it.
“This is an area we think we should all collaborate on. We may compete on the substance but we should collaborate on the technical infrastructure because otherwise Big Tech will impose its infrastructure onto us. It’s a moment for us to take a stand.”
Leading AI and computer-vision researcher Dr. Hao Li will dive into the cutting-edge world of generative AI during his keynote presentation at the opening session of the 2024 NAB BEIT Conference
March 24, 2024
March 24, 2024
Superfan Connections are Key to the New Creator Economy, Contends Patreon’s Jack Conte
TL;DR
Patreon CEO Jack Conte explains how the current internet algorithms are killing the traditional “follower” for creators, threatening their creative freedom and livelihoods.
Conte advocates for the new spaces on the internet (hey, like Patreon) where, he says, creators can always connect with their communities, create what they want, and control their own destinies.
The hallmark strategy of these businesses is the focus on deeper connections, as opposed to just more connections.
The internet may have started as a platform that democratized creative distribution for creators who could build legions of followers, but Patreon CEO Jack Conte says that model is broken. Rather than stand by witnessing the demise of the follower, platforms like his are offering a new way for creators and fans to connect in deeper, more fulfilling online communities.
“The next decade of professional creativity on the internet will be organized around the concept of the true follower in an effort to build a better way that art can exist on the Web,” Conte said in a presentation at SXSW.
Once upon a time creators could upload their work to platforms like YouTube and immediately have it accessible to millions of people. After that came the “subscribe” button, which enabled creators to go beyond reach. Now they could build a following and find their true fans who would support them to build a creative business.
But with the rise of platform-focused algorithms (Facebook’s ranking, TikTok’s “for you” curation), creators cannot reach their following and true fans. This shift has had a devastating impact on creators’ creativity and ability to support themselves doing what they love.
“Ranking was great for Facebook’s business, and people started spending even more time on the platform, so the other platforms had to compete. Now I think of the 2010s as the decade when the original promise of the creator-led community, the true follower, was broken,” Conte said.
“What it meant for creators was that your followers might not necessarily see your posts. It’s not really a direct true connection between a creator and their fans if the channel of distribution is broken.”
TikTok’s arrival shifted eyeballs from Facebook but didn’t fundamentally alter the broken fan-creator contract, in Conte’s view.
“TikTok’s algorithm ‘chose’ what videos to serve you in your feed and completely abandoned the concept of the follower,” he said.
But it worked, and TikTok hit a billion users by 2021. As traffic started flowing away from legacy social companies and toward TikTok, Facebook, YouTube and Twitter were forced to launch their versions of shorts, reels, or feeds to compete.
The result, said Conte, is that “the whole system of organization for the internet, the creator-led community, started to fade into the past.”
Conte started out as a creator himself. The result is that “my fans don’t see as much of my stuff anymore. It’s harder to sell tickets to a show. It’s harder to reach people with my new work. It’s harder to build community. It’s harder to build a business. It’s harder to energize my fans,” he said.
“The single most important problem that faces creative people today is the weakening of creator-led communities and of our distribution channels to our fans. This is the hardest, most challenging and most painful issue threatening the present and the future of creativity on the internet.”
Conte doesn’t actually believe that the “death of the follower” will happen because there is a new breed of creator-led social platforms coming to the rescue. These include Discord, Kajabi, Fourthwall and Gumroad, but it should come as no surprise that he positions Patreon as the leader of the pack.
Conte said the hallmark strategy of these businesses is the focus on deeper connections, as opposed to just more connections.
“The follower is too important, too valuable to ignore so the next wave of internet and media technology companies are going to try to solve this problem. The incumbent social platforms are not gonna be able to fight it because their revenue relies on maximizing attention to drive their businesses. They are being forced towards discovery, towards reach, personalization and algorithmic curation. These are the levers that drive attention and therefore drive their strategies.”
He argued that real value for creators is to be found in the real fan, or superfan. Just 5% of these fans drive 90% of the community. “This is a direct-to-fan business, not an ads business. This is about depth of connection rather than maximizing attention. This is about deeper fans,” he said.
“Creators just need a thousand true fans who really connect with you and believe in you. This is different than just reaching people. It’s even deeper than followers. These are super fans, true fans, real fans,” he continued.
“The idea is that this group of people is your core. If ‘reach’ means people see it, and ‘follower’ means people want to see more, then ‘true fans’ are the people who go to the shows and buy the merch and download the record and pay for the course and get the live stream tickets. This idea really resonated with me.”
To that end, Conte said the next decade of creative and media technology companies will focus on building direct to fan connections and community strength.
“As creators, we still need the social platforms for discovery and reach. But those companies will be one component of the many tools that we have as creative people to help us run our communities and businesses.”
Patreon was founded in 2013 and now employs 400 people and supports more than 250,000 creators, who have made over $3.5 billion dollars on the platform, according to Conte.
He says he no longer thinks of Patreon as a membership platform but more of a “true fan company, a creator company, where we’re building a better way for art and community to exist on the internet.”
Perhaps it is subscription fatigue or financial squeeze, but he says that many fans no longer want to pay to subscribe to content on his site.
“Rather than having those true fans leave the creator we want to give creators a way to start forming deeper connections with those fans to build businesses.”
It now offers a way for creators to sell digital products like videos, podcast episodes, images, and other files directly to customers, whether they’re a member or not.
“Fans can now participate in the creator’s business and community while the creator can build an awesome business along the way. The logic is very similar for free membership.”
As platforms cut back on creator support, many “accidental entrepreneurs” are left to navigate the creator economy’s complexities alone.
March 26, 2024
March 24, 2024
How Broadcasters Can (Will) Get With the Social Program
TL;DR
Michael Depp, chief content officer and editor at NewsCheckMedia, is the curator of the Programming Everywhere track at NAB Show, which aims to merge the worlds of traditional broadcasting and social media.
Social media influencers will share insights on building dedicated audiences around niche content, emphasizing the importance of direct engagement and adapting content based on audience feedback.
If broadcasters are going to grow their audiences — attracting younger generations — they can learn from leading content creators. NAB Show brings the worlds of TV and social together for networking and conference sessions designed to spark ideas and partnerships.
“Part of the problem that linear television is suffering from is that it has got stuck in a rut of moribund thinking,” explains Michael Depp, chief content officer and editor at NewsCheckMedia and curator of the Programming Everywhere track at NAB Show. “The premise of Programming Everywhere — now in its second year — is to convene a varied group of people across the media industry to talk about content production in a holistic way,” Depp says.
“Typically people in media and in television particularly stay in their lanes,” he continues. “Those lanes could be syndicated programming, sports, news, or they could be distribution on platforms like streaming, FAST channels, social media or digital.
“The industry’s problems may have been greatly accelerated by the fragmentation of media and the proliferation of streaming channels but broadcast execs need to begin to think about their programming needs in a more expansive way.”
Instead of revisiting “the same peer group sitting on the same panels across a typical conference day,” he says, “what we wanted to do was expose different constituencies to each other, to mix these people up and try to spur some new thinking about how to fill the many, many hours of daily programming that they now need to attend to.”
Broadcaster programming needs have expanded because their distribution channels have now expanded. Aside from linear channels, broadcasters all have streaming services such as digital catch-up or free ad-supported versions of linear stations.
Last year, Programming Everywhere convened executive decisionmakers from pretty much every US broadcasting company and layered onto that a good number of content creators, show runners, producers, talent and technologists.
This year at NAB Show, there's an additional layer of content creators who are native to social media and who have built huge audiences and successful businesses.
“They’ve made an end-run around the whole gatekeeper process of television and brought their video content directly to audiences via platforms like YouTube, Instagram, TikTok. These are mostly very young people who are simply passionate about something that they want to share,” says Depp.
“I thought it was very important to bring people who come from that mindset and generation and put them in the same room as the people who work in conventional television. The idea is to shake things up and put front and center the kind of out-of-the-box thinking that creators are practicing and broadcasters claim they want to embrace.”
Technologists are an important part of the conversation, too. They can share insight into how to boost efficiencies in the production process, since everyone needs to make more programming with less money.
The “Social Media/Streaming Stars on Growing Niches Into Audiences” panel Depp is moderating showcases four extremely diverse talents from very different subject areas and different backgrounds. They will share insights into why they gravitated to their particular niche; how they developed a content creation regimen that they stick to; how they continually calibrate that based on metrics about audiences; and then how they maintain that relationship with their audiences.
“Whether they represent national companies or local stations, TV executives will find something to learn from the people making media in this way. Now is the time that they need to lean in and pay attention to this mindset and this way of producing content,” says Depp.
“No one expects broadcasters to lift a content creator and just plonk them into TV. It’s been tried and failed too many times. But there could well be new ideas to be mined and if you don’t at least have an open mind you will never find out.”
The speakers on this panel include Jacklyn Dallas, who, like many content creators, began producing content as a teenager. Dallas creates how-to tech videos to energize and educate her Gen-Z audience on how technology impacts their daily lives and the most important trends to watch. Dallas does so with such success that she was invited to Google HQ to interview CEO Sundar Pichai.
“Jacklyn is very clear, very well-informed with remarkable access to major tech leaders,” says Depp. “Her content is accessible to a broad audience and she’s an extremely enthusiastic personality.”
Representing another side of the popular tech subject on social is Quinn Nelson. He presents consumer tech reviews on his channel Snazzy Media. Like all of the panelists, he’s a smart, articulate person with a strong social following. Quinn will speak to the ongoing process of calibrating content around the very granular metrics that you get from platforms like YouTube.
Travel is a major subject area on social and obvious broadcast programming overlap. You can hear best practices for the “video hustle” from professional content creator Juliana “TravelingJules” Broste. The winner of 12 Heartland Emmys, Broste can speak with experience about how to create content that cuts through in the competitive lifestyle/travel space.
Last but by no means least, the fourth panelist on this session is Sean Sotaridona, popularly known as Sean Does Magic. The Dutch-American magician and TikTok star is famous for posting magic-related videos. “With an incredible 33 million followers, Sean is a rock star creator who performs street magic with the global fanbase of David Copperfield. And he’s only 21,” says Depp.
While those in TV should listen to the content creators, it is not as if this is a one-way street. “The thing is that the content creators are also interested in television,” he explains. “In many cases — and this is a key point to get across here — the NAB Show is trying to evolve and get outside of broadcast parameters and become more of a content creation space. Broadcast and social are media for content creators (or producers, writers and directors in traditional broadcast parlance) and NAB is an ideal place for them both to mix.”
Depp says, “Content creators should think seriously about coming to NAB to get in the same room with the decision makers in television because those people who make programming decisions are looking for something new that they haven’t seen before.”
“It points to the precariousness of the creator’s position where they’re dependent on a few prime distribution platforms,” Depp says. “Anytime there’s an algorithmic tweak, it has implications for their reach, and now there’s something more existential like TikTok being forced to cease or sell. If a creator’s aim is to be distributed as widely as possible, then why shouldn’t television and TV’s digital channels be part of that mix?”
Alan Wolk, Co-Founder/Lead Analyst at TVREV, will be moderating the NAB Show session “The Future of FAST: Lessons Learned and What’s Next,” Tuesday, April 16 at 3 p.m.
Here, he confers with NAB Amplify’s Emily M. Reigart about social media, spatial computing, success within the creator space… and, of course, sunglasses as a way of life.
You’re famous (infamous?) for wearing your sunglasses on camera in all sorts of situations. Can you share the backstory behind this signature look? Great question! I started wearing glasses in my videos for a very practical reason — so it would appear that I’m looking directly into the lens, when in reality I was watching the camera monitor to make sure the shot looked correct. After some time, people started to see it as “my look,” so I just went with it — now I even sleep with them on!
There is a great quote: “With my sunglasses on, I’m Jack Nicholson. Without them, I’m fat and 60.” While I’m not fat, 60, or Jack Nicholson, I do appreciate what he was saying here.
You’re active on a number of social media platforms. Do you have a personal favorite format, or one that you find more challenging to create content for? YouTube will forever be my favorite. I still think it’s the #1 place for creative expression. As social media has drifted towards monetizing influence, sharing hot takes and chasing likes, YouTube is the last bastion for storytelling.
If you were a creator trying to break through in 2024, how would you try to differentiate yourself? You have to be honest. There is a huge void of truth in the creator landscape — the temptation is to find someone who’s successful and copy them. The audience will always reject this. Instead, find your truth and tell it.
Your motto is “do the impossible.” What’s your next impossible project? I’m not sure that’s my motto. I’ve never been able to find any success in following the traditional route in life, the paths we’re told to take. Instead, I try to find my own.
When I think of the future, I point my compass at what’s the most rewarding. Currently, that’s being a dad to my two little girls.
Will spatial computing and technology like Apple’s Vision Pro change how you create content? If so, how? Maybe, but I don’t know. I still fancy editing software to make straight cuts, the same style of editing that used to be done with scissors and tape. As technology continues to progress, telling a great story remains the same. It’s hard, it’s human and the technology is simply a tool to help communicate — not the story itself.
Casey Neistat is most famous as a YouTuber, but that wasn’t his goal… his career “wasn’t an option” when he started creating videos.
March 21, 2024
Creator Economy Amplified: Building the Creative Stack
Watch “Creator Economy Amplified: Building the Creative Stack.”
TL;DR
Blackmagic Design’s Bob Caniglia joins Jim Louderback, editor & publisher of “Inside the Creator Economy,” and veteran journalist Robin Raskin to discuss the essential hardware that powers today’s creator economy, offering a roadmap for assembling a creative stack that aligns with both vision and budget.
The pandemic has made production equipment more accessible, enabling home-based, professional-grade content creation and lowering entry barriers in the creator economy.
Advances in LED lighting and microphones have transformed production, significantly improving content quality through better lighting and audio.
AI integration and virtual studios are reshaping content creation, with metadata and shoppable videos poised to enhance engagement and monetization.
Choosing scalable and modular equipment for new studios is essential, supporting future growth and avoiding the need for frequent, expensive upgrades.
Navigating the creator economy is akin to exploring a vast digital ecosystem, where content is the currency and creativity knows no bounds. Advancements in technology have leveled the playing field, equipping creators with the tools to turn visions into visuals and ideas into impact like never before.
This shift towards democratization has opened doors for creators at all levels, breaking down traditional barriers and offering new opportunities to engage and captivate audiences. However, the path to creating content that truly resonates with audiences is not without its challenges. Beyond creativity and storytelling, it’s crucial to use the right tools. They not only enhance a creator’s vision but also ensure that the final product stands out in a crowded digital space, and in today’s fast-paced creator economy, leveraging these tools effectively can significantly influence a creator’s ability to connect with their audience.
As part of NAB Amplify’s “Creator Economy Amplified” series, Bob Caniglia, director of sales operations at Blackmagic Design, sat down with Jim Louderback, editor and publisher of Inside the Creator Economy, and veteran journalist Robin Raskin, founder and CEO of Virtual Events Group, to share insights on the essential hardware that powers today’s creator economy. These industry pros offer a roadmap for assembling a creative stack that aligns with both vision and budget — watch the full conversation in the video at the top of the page.
The Affordable Production Revolution
The pandemic has sparked significant changes in content creation, shifting how and where creators bring their ideas to life.
“What has happened over the last couple of years since the pandemic is a lot of people have been able to purchase equipment that allows them to do productions at home that they may not have considered in the past,” Caniglia, who has worked in the film and television industry since 1985, recounts, reflecting on this evolution and highlighting a broader trend.
“At Blackmagic,” he continues, “we’ve been able to create some of those products in a price range that makes it very affordable. So people are starting to buy little switchers and some cameras, and the next thing you know, they’ve set up an entire studio.”
This trend towards democratization has significantly impacted the creator community. It has leveled the playing field, allowing emerging creators to produce content that can stand alongside that of more established names.
The result is a more vibrant and diverse ecosystem, enriched by a wider array of voices and perspectives. By making professional-grade production more accessible, the industry isn’t just changing the tools creators use; it’s transforming who gets to tell stories and how those stories reach audiences around the globe.
Lighting and Audio: The Pillars of Quality Content
When it comes to producing quality content, lighting and audio are non-negotiable, Louderback and Raskin agreed. The duo is behind the all-new Creator Lab at NAB Show, a hub for exploring the latest trends and technologies with a full schedule of talks and hands-on workshops featuring industry pros.
“Lighting is just so important,” Louderback emphasizes. “There are all sorts of inexpensive LED lights out now that can do amazing things,” he says, noting that the advent of affordable LED lighting solutions has revolutionized the way creators approach production, allowing for professional-quality lighting setups on a budget.
“You can have lights that plug into your smart home so that you can say ‘I want to turn on the studio lights,’” he adds as an example, “So look into lights again, because they get cheaper and cheaper and better and better. And there’s so much more you can do with them.”
Audio, says Raskin, is another element that is often overlooked. The pandemic underscored the necessity of high-quality audio as creators sought to improve their production values. Upgrading from built-in laptop microphones and webcams to dedicated mics can dramatically enhance the clarity and fidelity of audio, she advises, elevating the overall quality of the content.
“People don’t realize how important it is,” she says, sharing that while she employs a range of solutions she’s currently using a podcasting mic from Shure. “The sound is crisp and accurate, and so much better than my laptop.”
Connectivity and Mobility: The New Frontiers
In today’s creator economy, the ability to produce content remotely and on the go has become invaluable. “Think about your home networks,” Louderback urges, stressing the importance of robust Wi-Fi connectivity for seamless content creation and live streaming.
“If you’ve got an old wireless setup, and you can’t run a wire from your router to your desktop or your notebook, upgrade your internet, think about some of the newer versions of Wi-Fi, think about running multiple different base stations, think about any way that you can go out and do a better job with Wi-Fi.”
To boost connectivity, Louderback recommends Wi-Fi 6-supported products in particular, along with connecting multiple base stations to your home router. “All of these things can make sure that you don’t get dropouts, and that everything that’s going into your computer gets up into the cloud without losing lots of fidelity.”
Wireless microphone technology has also improved drastically, he says. “Anker and DJI and Rode, and a couple of others, all have these really cool wireless mic kits, where you get a really small little pod that you can stick on yourself or the person you’re interviewing.”
Using wireless audio in the field “sounds really good,” and makes production “so much easier,” Louderback says. “That little microphone you stick on the lapel, it actually will record the audio and save a version of it on that as well. So if something goes wrong in the field (it always does), you’re going to have an ISO of that audio so that you can fix it in post.”
Caniglia touted the Blackmagic Camera app for iPhone as a game-changer for mobile content creation, offering professional camera settings and cloud integration for easy file transfer and editing.
“[The Blackmagic Camera app] creates a better version of your iPhone. In terms of recording for video, it has a lot more of the settings that you would see in a Blackmagic camera,” he says, explaining that the app is also tied to Blackmagic Cloud, which allows users to send files directly to the cloud so they can be edited from the field. The app also allows creators to use their phones as a second camera and still achieve high quality results. “We’re going to see a lot of people on the uptake of doing that.”
Beyond hardware, software also plays a crucial role in the creative stack. Caniglia discussed how DaVinci Resolve, the company’s free editing and color grading software that has become a cornerstone for post-production, is enabling creators to collaborate and expedite the finishing process without compromising on quality.
“Almost 40 years ago, [when] I went to editing school, I would have needed $500,000 to set up an edit system in my house. And now it’s free.”
Looking Ahead: The Future of Video Production
As the creator economy evolves, so too do the tools and technologies at creators’ disposal. Louderback expresses excitement about the integration of AI in video production, particularly in audio enhancement, foreseeing a future where AI acts as a “copilot” for creators.
“I want AI in my camera and AI on my phone, AI in my audio, so I’m all up and ready to do a shoot,” he enthuses. “I can’t wait to see that happen.”
Studios, says Raskin, will become a thing of the past, “at least the ones with nails and hammers and wood.” Instead, virtual studios will become the norm. “So your studio can be as big or small or anywhere in the world, and that’s going to change all sorts of events and all sorts of content.”
Metadata will turbocharge video’s utility, Raskin predicts. “Video, I used to call it unstructured data, it’s just there, nobody knew what to do with it,” she explains. “Well, now, through meta tagging… all of a sudden video becomes something that’s structured, and you can learn what keeps people’s engagement, what gets them involved.”
Shoppable videos are another big trend to watch out for, she says. “You can see Amazon and some of the big players, Etsy, Pinterest, all doing shoppable videos, and it’s going to be even more so this year, we’re all going to be selling things to each other.”
“When you think about buying your initial setup, when you’re just getting started, make sure that it’s stuff that you can grow with,” Louderback advises, noting that many camera setups and other equipment allow users to build systems on top of them. “You can stack, you can grow with your equipment. Make sure you look for modularity and upgradability as well,” he says, “rather than buying something, and then be like, ‘I’m gonna throw it out, and I’m gonna upgrade to something new.’”
March 19, 2024
Posted
March 19, 2024
Hey, Sam! Tell Me About Audience Measurement
TL;DR
Consistent measurement practices are more important in the age of “build your own bundle” TV.
Nielsen’s Paul LeFort shared insights into how his company handles audience measurement in this increasingly fragmented era, when it’s common for viewers to watch OTA, streamed and pay TV content in one week (or even one sitting).
“Audience measurement has long been at the heart of the media business,” says NAB EVP/CTO Sam Matheny.
That continues to be true in the age of what Matheny calls “BYOB TV,” referring to those who have “stitched together” a modern version of a cable bundle, often combining a variety of streaming and over-the-air offerings.
Matheny discussed the importance of consistent measurement practices in a fragmented market with Paul LeFort, managing director for Nielsen’s local TV business, during the latest installment of NAB Amplify’s “Hey Sam!” interview series. Watch the full conversation (above) or read on to learn more about LeFort’s perspective on how viewing has — and hasn’t — changed.
Understanding Video Consumption in 2024
First of all, LeFort points out, “The piece that really stands out is the consistency of over-the-air television. Ten years ago, [it] was about 12% of all households in the US. Now, about 15% of households in the US receive over-the-air television. So while there’s this tremendous churn in the streaming landscape, in the pay TV landscape, the resilience of over-the-air television remains consistent.”
He concludes, “And clearly, that is a main connection point between viewers and their communities. The local news that broadcasters provide, [they] do a stellar job of covering their local communities.”
One reason for the churn elsewhere in the marketplace is that customers have come to expect a pattern in which “you sign up, you binge it, you blow it out in a couple of weeks, and then you terminate the service,” LeFort says.
Consequently, “All of these streaming services, they’re making it easier for folks to come in and out of that ecosystem. When there’s a big event, March Madness, Super Bowl, you know, golf tournaments, etc., if there are reasons that you want to sign up, you’ll find those reasons.”
But that convenience-driven fluidity makes understanding metrics more challenging for advertisers (and industry watchers, natch).
“The evolution, the investment, the complexity of what it takes to measure local television in the US, it’s never been greater,” LeFort admits.
He says, “We take that very seriously, the responsibility that we have to measure these audiences, report them in a way that’s comparable, give our clients data that lets them make decisions, not just about their content, but about their revenue and their advertising.”
“Advertisers,” LeFort says, “don’t really care what the platform is. Whether it’s a stream, or whether it’s broadcast or satellite, they just want to be able to reach their consumers. And conversely, those content creators, whether they’re creating local news, premium scripted content or a reality show, they want to understand: is my impression bigger or smaller than those of the other content sources that are out there?
“And so while impressions are the great leveler, it’s being able to look at those different viewing sources in a comparable way that allows advertisers and content creators to understand their worth relative to their competitors.”
But in 2024, LeFort says, “There is no one single solution anymore. There’s no one cable lineup, there’s no one broadcast channel; it’s always going to be a blend of technologies and platforms. And we see that all of those datasets have their own inherent value amongst themselves. How do you harmonize them? How do you correct them? How do you make them comparable across content owners and advertisers and broadcasters? And that’s the role that we play — to bring those things together and do it in a way that is transparent, that can be audited, that fully follows MRC accreditation policies and procedures.”
The fragmentation of the video market “almost amplifies the need for that measurement, even more so because everybody has to understand where they sit in this ecosystem, and do so in a comparable way,” he says.
“You’ve got to have similar metrics,” LeFort says. “And they have to mean the same things, whether it’s broadcast television, or a stream, or cable or satellite. And so listen, monthly average users, that’s important, because that shows how many people are finding you. Clicking once or 100 times, you get counted as a monthly average user. So it does have value in my opinion.”
He explains, “It’s fundamentally important to understand how many people touch my content, how many people tune in, how many people launch a stream… However, [MAUs] miss the other really important part of any kind of audience measurement, and you said it as well: how much time do they spend. At the end of the day, every number you see from Nielsen or any legitimate measurement company is going to involve those two things: how many people tune in and how much time they spent. And from those two measurements, you get the GRPs, the gross rating points, you get share, you get impressions, you get reach, you get all of those things, but you have to start out with those two fundamentals.”
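The two fundamentals LeFort names (how many people tune in, and how much time or how many exposures they log) are what the derived metrics he lists are built from. A minimal sketch of that arithmetic in Python, using made-up numbers and textbook definitions rather than Nielsen's actual methodology:

```python
# Illustrative only: simplified versions of the derived metrics LeFort
# mentions (reach, frequency, GRPs), built from the two fundamentals:
# how many people tuned in, and how much viewing they did.

def audience_metrics(universe: int, viewers: int, impressions: int) -> dict:
    """Derive basic audience metrics.

    universe    -- total households (or people) in the market
    viewers     -- unique households that tuned in at least once
    impressions -- total viewing occasions across all viewers
    """
    reach_pct = 100.0 * viewers / universe    # % of the market reached
    frequency = impressions / viewers         # average exposures per viewer
    grps = reach_pct * frequency              # gross rating points
    return {"reach_pct": reach_pct, "frequency": frequency, "grps": grps}

# Hypothetical market: 1,000,000 households, 150,000 of which tuned in,
# generating 450,000 total impressions.
m = audience_metrics(1_000_000, 150_000, 450_000)
print(m)  # reach_pct 15.0, frequency 3.0, grps 45.0
```

This uses the standard identity GRPs = reach × frequency; real measurement layers demographic weighting and time-spent adjustments on top, which this sketch omits.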
Making Sense of the Mess
So how does Nielsen do it?
First, LeFort explains, “Big data has a ton of value and a lot of benefits. And that’s why we’ve gone full tilt at incorporating those sets of big data.”
“One source of big data is directly from the streaming platforms themselves,” he says. However, “The census data” from streamers and pay TV companies about user habits “exists in most cases behind a wall. So Netflix knows a ton of information about what you look at on Netflix, what I watch on Netflix, and how we’re all different and serves us up things that they think we want to watch.”
Streaming video “platforms have Nielsen technology built into them,” LeFort explains. “So when you sign up for Disney+, or sign up for TV, or Peacock, go into your user settings, everybody clicks right through… you’ll see Nielsen measurements specified in those settings.”
Also, “We’re working with Comcast, and Charter and Rush, DirecTV and Dish, obviously, to collect their big set-top box data. But the Dish [customer] looks very different than the DirecTV customer does, right?”
That’s where Nielsen’s audience panels come in. He says, “Panels have such a fundamental role to provide full market coverage and representation across ethnicities, geographies, different households, kids, household language, all of those things that are so relevant to measuring video. Spanish-speaking households view very differently than non-Spanish-speaking households, older households view very differently than younger households with kids. And so those panels provide really the foundational truth set that we can correct for the biases that we see in big data.”
LeFort says, “It’s the combination of those big datasets, whether it’s from the streamers, whether it’s from pay TV set-top boxes, or from Nielsen’s panel, that allows us to harmonize and provide things that are representative and comparable between those different [sources].”
Nielsen then “harmonizes all of that big data from census providers [streamers], like you mentioned, with the fully representative panels that we create.”
NextGen TV to Change OTA Measurement
Additionally, Matheny points out, “Seventy-five percent of the U.S. television households are now in a market where a NextGen TV signal is being broadcast. NextGen TV was designed, you know, from a blank sheet of paper to be both an over-the-air broadcast transmission as well as a connected television solution that leverages the internet.”
“NextGen TV, and its ability to collect privacy-protected information from those households, from those viewers, is tremendously exciting,” LeFort says, “because it’s going to give broadcasters, for the first time ever, really first-party data at scale that can be used for a whole bunch of different applications, right? Whether it’s behavioral targeting, digital targeting, making television — linear, live, local linear television — a lot more like digital for advertisers to buy. That’s incredibly exciting. A ton of potential there.
“And then finally, from a measurement perspective, one of the really exciting things about that first-party big data is it will for the first time ever provide a big data source for over-the-air households, this important, resilient, sizeable section of the marketplace. And so as we look at always adding more big data sources to our measurement, as NextGen grows and scales, we think there’s an opportunity to build a big data source for over-the-air based on that.”
FAST is still an emerging industry, but all signs point to it being one that will remain part of the entertainment diet for the future.
The number of FAST channels in the US has skyrocketed to nearly 2,000 with no signs of slowing down, finds a new report from researcher CRG Global.
Forty percent of US adults regularly watch FAST and that figure is rising, making it an important part of the country’s media mix.
Gavin Bridge, VP of Media Research at CRG Global, will reveal the latest research and all you need to know about planning a FAST future at NAB Show 2024. Register with the code AMP05 to attend.
Media execs know that Free Ad-Supported Streaming TV is a revenue generator — until they don’t. There’s a lot going on under the hood of linearly-scheduled, advertising-funded streamed channels. If you want to understand how to make the most of the opportunity, then a new report from CRG Global, in partnership with NAB Amplify, is your ticket to the inside track.
The session will explore the current state and future trends of free ad-supported streaming TV (FAST), a rapidly growing segment of the OTT market, and joining Bridge will be: Bethany Atchison, Vice President of Distribution Partnerships at VEVO; Michael Hyon Johnson, Director of Operations for ElectricNOW; and Michael Senzon, President of Digital for Allen Media Group (AMG).
The speakers will examine how, in a rapidly evolving media landscape, traditional television is being overshadowed by innovative streaming solutions.
FAST is one of these innovations, and has surprised many with its rapid growth in the last few years. This session with leading executives from the FAST space will examine the opportunities and challenges of running a FAST channel, such as content curation, personalization, discovery, loyalty, optimization, localization, and monetization.
The panel will also discuss the role and responsibility of platforms and publishers in delivering value to viewers and advertisers, and in fostering a healthy and diverse FAST ecosystem.
By the Numbers
The end result for attendees will be a greater understanding of what FAST is and, for those not yet participating, insights as to why FAST could be part of their media strategy.
CRG’s research peels back the layers on FAST, revealing not just its current state but also its trajectory in the Media & Entertainment industry.
For a start, about 40% of US adults regularly watch FAST and that figure is rising, making FAST a very important part of the country’s media mix, the report finds.
Researcher CRG Global also reports that just under half of FAST viewers (48%) watch daily, with a further 39% watching a couple of times a week.
It’s for that reason that many non-traditional video operators include a FAST service within their portfolio. That includes Disney, Fox and NBCU, as well as Dish, Charter and Comcast, Amazon, and even Google TV and TV set manufacturers like Samsung. FAST is the reason Walmart acquired Vizio.
Taking a deep dive into FASTonomics, the report outlines what FAST both is and isn’t. There are competing predictions for growth in 2024, it finds. Omdia suggests the FAST market in the US will hit $7.4 billion this year, and S&P Global’s Kagan estimates it to be $6.2 billion, while CRG Global reckons it to be nearer to $5.1 billion due to the weaker ad market.
Yet FAST remains shrouded in secrecy, the report notes. Not one service publishes domestic user numbers anymore — Paramount used to do so for Pluto TV, but ceased during the pandemic.
Instead the industry has to rely on external analysts to assess the market and put a value on it, which can be confusing given the blurred definition of the FAST business model.
What is not in doubt is that the number of FAST channels has grown considerably since 2020, from 489 distinct channels available across major services to nearly 2,000 as of February 2024.
When NBCU announced last June that it would be making close to 50 FAST channels available for licensing, it marked a new point in FAST history. The scale of the launch was unprecedented and yet is just the tip of the content iceberg that many media firms have at their disposal.
What’s Seen on the Screens
The report also details the type of audience that watches FAST, showing that while pay TV still provides utility to many, FAST is filling a need that cable is not.
“If a time-traveling FAST executive from the start of 2020 suddenly found themselves in 2024, one of the chief elements that would shock them — aside from the total number of available channels — would be how news has embraced FAST,” says Bridges. “It is the explosion of local news that would attract the most attention.”
Four years ago, there were three local news stations available on key FAST services. That figure is now 231, with Scripps, Cox Media Group and Hearst embracing distribution across a number of major services.
“The extensive local offering helps cord-cutters stay in touch with their communities and allows local stations to reach the greatest possible audience,” the report finds.
FAST is still an emerging industry. But all signs point to it being one that will remain part of the entertainment diet for the future.
Alan Wolk, Co-Founder/Lead Analyst at TVREV, will be moderating the NAB Show session “The Future of FAST: Lessons Learned and What’s Next,” Tuesday, April 16 at 3 p.m.
March 17, 2024
How “Nolly” Recaptures An Entertainer’s Brilliance/Resilience (and the Best of 1970s Broadcasting)
TL;DR
A new British period drama is a biopic of the star of a minor tea time soap opera called “Crossroads,” Noele Gordon, affectionately known to her many adoring fans as “Nolly.”
Cinematographer Sam Care and director Peter Hoar wanted to pay tribute to Nolly and recognize her achievements before and after “Crossroads,” but couldn’t ignore the broadcasting history lesson that a 1970s low-budget soap opera would give us circa 2024.
Combining a modern digital camera with vintage anamorphic lenses, the production employed a Sony VENICE camera with Meru lenses, which are rehoused vintage Leica glass.
The first scene of each episode was shot on 16mm film as an homage to the prominence of the format in television production over the decades.
To recreate the broadcast quality of the show, the production team also used Ikegami HK323 1980s broadcast cameras, which were sourced from a company called Golden Age TV.
Costume or period dramas are standard fare in the UK’s film and TV output, but Nolly is the first drama to put old technology above the importance of the frocks. The limited series is a biopic of the star of a minor tea time soap opera called Crossroads, Noele Gordon, affectionately known to her many adoring fans as “Nolly.”
Cinematographer Sam Care and his director Peter Hoar wanted to pay tribute to Nolly and recognize her achievements before and after Crossroads. Still, they couldn’t ignore the broadcasting history lesson that a 1970s low-budget soap opera would give us circa 2024.
Care also wanted to honor the broadcasting tech of the time and decided to go the extra mile to make it as genuine and realistic as possible.
“We decided to go for a mixed media approach. The idea was partly to pay homage to some of the formats they had shot within the period. So we combined three formats: a modern digital camera with vintage anamorphic lenses — a Sony VENICE camera with Meru lenses.” The Merus are rehoused vintage Leica glass.
“That was our main format, which still had a vintage edge because of the lenses, and we framed that for a 2:1 aspect ratio. We also decided to shoot the first scene of every episode on 16mm film; this was our homage to 16mm, a prominent format in television production over the decades. But these early scenes are related to specific periods earlier than the main story in Nolly’s life. Some occurred in the 1930s and 1950s, whereas most of the action happened in the 1970s and 1980s; it also helped with our mixed media approach. You’ll see the 16mm’s softness and grain structure, especially at 500 ASA, using Kodak Vision 3 stock,” Care said.
“We framed those opening scenes in a 1.66:1 ratio, slightly different from the main one,” he continued.
“Then, the third format recreated the broadcast quality of the show. We used these Ikegami HK323 1980s broadcast cameras, which we sourced from a company called Golden Age TV, which maintains them. They’ve supplied The Crown and various period shows. We had to re-light the set for them as they’re not as light-sensitive as modern cameras; we framed them in a 1.33:1 aspect ratio or 4:3, which is the ratio that they would have shot in back in the day.”
The show’s homage to technology also helped separate the on- and off-camera scenes from each other as the drama entwines the two, rather than relying on a sketchy color-grading solution.
Exploring different ratios with different image textures within the one show was exciting for Care. “It’s all very well doing this, but you need a reason in the script, so it’s for storytelling reasons. It was also fun for the camera team and me. We got to use 16mm cameras again and revisit labs – checking the gate was new for some people.”
The Kodak stock was rated at 500 ASA, but the Ikegami’s were working at only 50 ASA, so a new lighting design was needed for each. “Originally, the director wondered if we could shoot the Venice simultaneously as the Ikegamis, but ultimately, we had to shoot them differently.
“Even the actors noticed the studio heating up when we used the Ikegami cameras with the lighting to expose them properly. We realised that back then, that kind of heat was typical.
“So the lighting was the other element of being ‘period correct’ on the show. We initially looked for an old film studio in which we could build the sets that would have the old vintage walls and some of the rigging in the ceiling. But there was nothing available. We created them in Space Studios in Manchester, a very modern stage. We ended up building eight sets in there.
“Our big challenge was that we knew that one of our visual approaches was long Steadicam ‘oners’, which would show a large part of the sets. Also, we would start with a closeup of the cast, so the lighting had to be flattering. We also wanted to spin around and see some of the rigging in the ceiling with all the lights on show. We would then finish the shot back in a closeup.
“My challenge was that every lighting source we saw in the shot was also everything that lit the set and had to be period correct. So, my gaffer, Steed Barrett, and I got Panalux, who provided all the lighting for the show, to bring us one of every period light they had in their UK branches. We laid them all out in a room and tested them for a day. We ended up with 2K Zap lights and 2K Bambinos. So that ended up being over 200 period-correct Tungsten sources in this enormous stage over the eight sets. They were all in-shot and wired back to our desk op.
“The lights had to be softer than the ones used at the time and flattering for our actors, so we tended to use the 2K Bambinos, harder Fresnel lights, as backlights or three-quarter backlights to get a nice edge light on the actors. The Zap lights we would put more on the side and in front of the set, as they were an early sort of soft box, I suppose.
“They would have a bulb inside that would bounce into a silver reflective inner lining with an early form of an egg crate that would give the light some direction. They were soft enough to light Helena (Bonham Carter) in a closeup and still look flattering. We did a lot of testing to figure it out and eliminate any shadows.”
Representing Crossroads as realistically as they have makes Nolly a far better show than anything half-hearted would have been. Ultimately, Care and his team needed no digital set extensions, and virtual production was used only for a night bus scene. There were scenes from an actual excursion to Venice, but the Thailand scenes were shot on a Bolton stage in the north of England.
If you’re planning to revisit old eras of television broadcasting for any narrative reason, then Nolly might be the new standard to live up to.
Posted
March 17, 2024
NAB Show Amplified: Getting and Keeping Audiences Engaged
TL;DR
Mike Chioditti, VP of strategy and business development at Endeavor Streaming, answers some questions about what’s top of mind for the industry and what he suggests looking for at NAB Show.
The panel will be moderated by nScreenMedia Chief Analyst and founder Colin Dixon. Chioditti will be joined by speakers Vishal Parikh, LiveLike Chief Product Officer, and Tramon Thomas, Phoenix Suns VP of Digital, Social and Creative.
Attendees will learn how sports organizations and media companies are employing campaigns and engaging initiatives to successfully build community, strengthen brand and team loyalty, and create interactive experiences that keep fans coming back.
Here, Chioditti examines how changes in streaming have translated into shifts in monetization strategies for rights holders in the world of sports media.
What are the biggest trends impacting the community/industry right now? Mike Chioditti: Across the industry, we’re seeing an increasing emphasis on monetization as well as the emergence of “membership” models. In prior years, many rights holders were focused on the basic blocking and tackling required to stand up a streaming offering. More recently, rights holders are now viewing themselves more as media brands in response to the evolving landscape. Forming a comprehensive direct-to-consumer strategy has become an essential part of an optimized media strategy.
This maturation has led to revenue growth strategies such as the integration of advertising, the introduction of premium tiers, and more thoughtful windowing between distribution channels. In addition, direct-to-consumer is increasingly being used as the foundational layer for broader “memberships” that reward desired fan behavior (e.g., viewership, live game attendance, engagement) and integrate various assets from throughout a rights holder’s ecosystem (e.g., access, exclusive content, merchandise, tickets).
What challenges does the community need to overcome because of these trends? The industry needs more standardization in terms of requirements, integrations, pricing, and packaging. For example, rolling out advertising can be a time-consuming and complex process when determining what technology and resources are necessary – and inconsistent commercial terms can make it difficult to compare the pros and cons of various options.
What’s one thing you wish more media pros knew about? Just speaking for myself – I think media pros would benefit from spending more time clearly defining and prioritizing their organizational objectives internally and then forming their strategy accordingly.
Often an organization will look to prioritize both reach and revenue at the same time; however, this can lead to targeting the wrong KPIs and a misalignment in approach.
What are the top three things that attendees should go hunt down on the show floor to expand what they just learned in your sessions? Streaming has emerged as a fundamental part of a rights holder’s strategy. I recommend attendees speak to the streaming experts to see what role it should play in their growth strategies.
AI is also going to be integral to personalization, while advertising will be key as more dollars transition from linear to digital.
What discussions should they be having with the exhibitors? Attendees should be asking the experts “How can these emerging technologies improve my business, and what is the cost or lift to implement them?” Those factors will be key to implementing a successful digital strategy that meets their needs and objectives.
Posted
March 16, 2024
NAB Show Amplified: How Audio Entertainment Is Evolving, Expanding, Immersing
TL;DR
Jackie Levine, head of Television and Film at Audible shares her thoughts on evolving trends in audio entertainment.
Jackie Levine, head of Television and Film at Audible, will appear on the session “The Great Reset: Future of Podcasting from Hollywood and Beyond” on Monday, April 15, at 1:30 p.m. Moderated by eMarketer’s Jasmine Enberg, this panel will also feature Paramount Global EVP of Podcasting and Audio Steve Raizes and SiriusXM SVP of Comedy & Entertainment Radio and Podcasts Adam Sachs.
What excites you about your position at Audible as it relates to the future of this industry?
Jackie Levine: With the explosion of audio in the entertainment world, I’m excited about the exchange of ideas between audio, TV and film.
My initiatives at Audible give fans the opportunity not just to listen to, but also see original and classic stories come to life in new ways. We’ve become a destination for creators to expand their stories in so many ways and create franchises that start in audio.
What are the biggest trends impacting the community/industry right now? Finding and originating breakthrough content amid the vast number of great shows, movies, and podcasts at consumers’ fingertips. Socially relevant, high-concept (and/or IP-based) content is currently trending.
What challenges does the community need to overcome because of these trends? We need to continue to strive to find high-concept projects and ideas that embrace the needs and desires of the consumer, and to make the highest quality material possible. How can we entertain AND make it meaningful? Some of what is performing well in the marketplace right now offers a way to escape, hopeful outcomes, and wins.
What’s one thing you wish more media pros knew about? I wish more media professionals would prioritize quality writers and trust them to deliver. There are so many steps and assurances needed as the media business becomes more and more condensed, which can result in selection decisions that are more fear-based.
Encouraging experienced writers and producers to do what they are great at without second guessing them could help all of us to successfully create more commercial and quality material.
Creator Economy Amplified: The State of the Creator Economy
TL;DR
Jim Louderback, editor & publisher of “Inside the Creator Economy,” joins veteran journalist Robin Raskin and Tyler Chou, founder & CEO of Tyler Chou Law for Creators, for an exclusive Q&A on the evolving challenges and opportunities within the creator economy.
The creator economy is booming, projected to grow from $250 billion to $500 billion worldwide. However, platforms like YouTube and TikTok are cutting back on creator support, leaving many “accidental entrepreneurs” to navigate the industry’s complexities alone.
Creators are diversifying their income streams beyond traditional platforms. Chou emphasizes the importance of sustainable monetization strategies, including launching products, courses, and consumer goods.
Artificial intelligence offers significant benefits for content creation, such as efficient editing tools. However, the rise of deepfakes and synthetic media poses new challenges for protecting digital identity and intellectual property.
Understanding and engaging with the audience is crucial for growth. New tools like the Superfan app are emerging to help creators connect with their most dedicated fans and develop content that resonates with their community.
In an era where digital content is king, the creator economy has emerged as a vibrant and essential ecosystem, empowering individuals to turn their creativity into careers. Yet, as this burgeoning economy continues to expand at an unprecedented pace, creators find themselves navigating a sea of opportunities fraught with challenges.
As part of NAB Amplify’s “Creator Economy Amplified” series, we sat down with Jim Louderback, editor and publisher of Inside the Creator Economy; veteran journalist Robin Raskin, founder and CEO of Virtual Events Group; and Tyler Chou, founder and CEO of Tyler Chou Law for Creators, to discuss the current state and evolving dynamics of the creator economy, including the challenges and opportunities facing today’s digital creators. Watch the full conversation in the video at the top of the page.
The Growth and Challenges of the Creator Economy
In addition to running a thriving YouTube channel, Tyler Chou, The Creators’ Attorney, represents dozens of creators as part of her “accidental” agency; the veteran Hollywood lawyer also advises countless others on strategies for growing their businesses.
Creators are increasingly on their own, Chou says, describing a climate where “these accidental entrepreneurs” are having their IP stolen left and right and struggle with securing payments from partnerships. “I have two creators right now who have six-figure brand deals that have not been paid. So they’re sort of adrift — on nice yachts — but kind of adrift at sea.”
Louderback, who alongside Raskin is leading the Creator Lab at the 2024 NAB Show, says this lack of support for creators is the biggest shift he’s seen over the past year. “Goldman Sachs says [the creator economy is worth] $250 billion worldwide, growing to $500 billion worldwide. Yet, if you look at the platforms (YouTube, TikTok, Facebook, Snap), they’re all laying off their creator support teams.”
This lack of support means more burnout for creators. “Creating right now is a hamster wheel of effort,” Louderback describes. “And creators are getting burned out and they’re being pushed in so many different directions.”
A Rapid Maturation
Despite this shift, the creative economy is rapidly maturing, Raskin contends, pointing to the astronomical number of people who self-identify as creators, as documented by The Washington Post series, “The Creator Economy,” as well as the proliferation of college courses on the subject. “When colleges have after-school clubs on being a creator, you’re seeing a maturing industry, and it’s maturing very quickly.”
This evolution is driving creators to diversify, Raskin continues. “They’re realizing that a platform like YouTube or TikTok will only take you so far.”
Creators need to know how to make money in a more sustainable way, Chou urges. “Not just heads down, making videos, depending on AdSense and brand deals,” she says. “I help them launch products, courses and actual consumer goods. They’re starting to think about, ‘Okay, I’ve been on YouTube for a few years now. I have a million plus subs. Now what?’”
Embracing AI and New Technologies
Artificial intelligence is also helping to shape the creator economy, both as a production assistant and as a possible foe in the form of deepfakes.
Will AI kill the creator? “If you asked me a year ago, I would say maybe, but now I’m actually very optimistic about how AI can augment and supplement creators,” Chou says, pointing to editing tools like Opus Clip, which uses AI to break long videos up into shorter segments in less than a minute. “With my editor? That would take him a week-and-a-half.”
Chou compares generative AI to streaming in terms of disruption. “I think we all say now that streaming is better, right? Like, the death grip that the studios had on the distribution of content has been expanded with streaming,” she explains. “And I think that’s what AI will do.”
But concerns about the challenges of protecting digital identity and intellectual property in an era dominated by deepfakes and synthetic media continue to proliferate, Raskin notes. “Digital identity, protecting your digital IP, is a huge problem,” she says, underscoring the need for robust solutions to protect creators’ work and identity.
Building and Monetizing Audience
Figuring out the best places to cultivate community to help reduce the stress of constantly needing to post across multiple platforms remains one of the biggest tasks creators face, says Louderback. “I think it’s a real challenge. And I don’t see it getting any better right now, unfortunately.”
He emphasizes the importance of understanding and engaging with one’s audience for growth and monetization.
“If you want to grow your audience and monetize, you need to know your audience,” he advises. “So think about your community. Think about your audience. Think about the people who tune in every day and every week to look at your videos. How do you get to know them?”
There are a number of new tools coming out that are designed to help creators understand their audience, Louderback says, including the Superfan app, which helps creators develop a creative network of their most ardent supporters.
These tools, he says, “will help you really understand what your biggest fans want, will help you move those fans in their own communities, and help you figure out how to create things specifically for them that they’ll want to pay for.”
Chou advocates for creators to establish their platforms, emphasizing the importance of maintaining independence from external platforms. “I’m actually in the process of moving my community [from YouTube] on to Mighty Networks, so that I can have more one-on-one connection with them,” she details, describing a newly-launched product from the company called People Magic. “They match people of similar interests together to talk — I think it’s fantastic,” she says.
“I think creators should create on their own land, not on rented land,” Chou advises, encouraging creators to view themselves as businesses capable of generating diverse revenue streams beyond content creation. “Because we have to realize all the platforms are rented. If you have your own community, your own website, your own email list, that’s something that’s yours, and they can’t be taken from you.”
Amy Webb’s 2024 Technology Predictions: Yes, They Include a Lab-Made Human-Machine AI-Brain
TL;DR
Artificial intelligence, biotechnology, and a burgeoning ecosystem of interconnected wearable devices are converging and will redefine our relationship with everything, from banks to information to our own bodies, the Future Today Institute predicts.
Combined, these technologies are ushering in a transformative age from which no one will escape, characterized by an extended period of booming demand driven by substantial and sustained structural changes in society, the economy, politics, and everything else.
We are headed to the merger of silicon-based AI with the organic matter in our own brains, a terrifying thought made scarier still by the fact that the only people in charge are a handful of profit-driven techno-capitalists.
Artificial intelligence, the connected internet of things and biotechnology are converging into a “super cycle” that will impact every segment of our economy, for better and for ill, according to the “2024 Tech Trends Report” from the Future Today Institute (FTI).
Each one of these technology segments is now a General Purpose Technology (GPT) with the same ubiquity and utility as electricity, the report found. Each one has the ability to “radically shape our economy and society,” but now they are converging and accelerating everything.
“The wave of innovation we’re experiencing is so potent and pervasive that it promises to reshape the very fabric of our existence,” CEO Amy Webb writes in the report. “From the intricacies of global supply chains to the minutiae of daily habits, from the corridors of power in global politics to the unspoken norms that govern our social interactions.”
The FTI report covers a whopping 695 technology trends including AI, the metaverse, robotics and Web3 across 16 industries including entertainment, sports, news and information.
It is the three macro trends that dominate, and the catalyst is AI.
“GPTs already connect in some way to every other technology that exists,” Webb explained in a presentation at SXSW. “It connects to science, to space exploration, it connects to sports, to every business to every single person and every facet of our daily lives in ways that I think are exciting, and good, and absolutely terrifying.”
The Transition Generation
Here’s the problem: the people in charge making decisions that govern our future are stuck in the glare of the headlights.
“Uncertainty is crippling our leaders,” Webb said. “They are now making decisions out of fear and the fear of missing out.”
Speaking to the SXSW audience, she said, “You don’t know what to do about AI, you don’t know what to think about AI. A lot of you haven’t even started digital transformation, because it’s costly and expensive and it takes time. You’re worried if you can hit your revenue targets, and if you’re still going to have a job. And by the way, we haven’t even gotten to supply chain issues and climate change and geopolitical problems and the threat of a third world war.”
She characterized this era of uncertainty as “Generation Transition,” in which we are all a part.
“Our society is going to look very, very different after this transition has completed its cycle,” she said.
“The foundation of the tech super cycle is the next era of computing and it’s going to be embedded into every single thing that we do, all of the time, and all the products and services that you use.”
Accountability in AI
The AI section of the FTI report is enormous, reflecting the seismic impact the tech is having on everything. There are more than 100 trends. It’s 150 pages long and covers languages, trust, security, and regulations.
Accountability, or the lack of it, is going to continue to be a huge issue. “Bias is still not table stakes,” she said. “Fixing this problem is challenging and enormous, because the models have already been built.”
“We keep hoping for change but we’ve put the wrong incentives in place. The incentives are in place for speed and scale, because that’s where the money is. The problem of bias is not going to get better. Going forward, it’s going to get worse.”
A second AI trend is a shift from having to prompt AI tools with literal descriptions of what you want, to something a lot more esoteric. OpenAI’s Sora points the way.
“In two years, you’re not going to need specificity anymore,” Webb predicted. “Instead, you’ll just start with a very broad general concept, you will brainstorm alongside an AI to continually iterate and refine until you get whatever it is you want; a concrete framework, technical specifications, a new business plan. You don’t even have to start with that fully formed idea anymore.”
Where AI links into the “connected ecosystems of things” is data. If AI is the “everything engine,” that engine is going to need data. And lots of it. “Sometime in the next two years, AI will have run out of the internet,” Webb said. “We will have used up all of our high quality text and data, which will slow down AI’s progress.” So companies are inventing new devices to sell to us so they can get more data in.
“The problem is that most of the training data that we’re using to train AI is online (Wikipedia, Reddit, books, spreadsheets, etcetera). AI can’t actually interact with normal real people. So we don’t just need more data going forward, we need more types of data. Which means that large language models aren’t enough.”
A Constellation of Sensors
What’s coming next are large action models which pull in sensory data, and visual data from anything with a sensor connected to the internet.
“We’re about to be surrounded by millions of sensors that are always on and also always on us,” Webb said. “They can collect multiple streams of data at once.”
The constellation of sensors includes wearables, XR devices, the Internet of Things, the home of things, smart cars, smart offices, smart apartments, sensors everywhere.
“This is the network of interconnected devices that communicate and exchange data to facilitate and fuel the advancement of artificial intelligence. We’re about to see a Cambrian explosion of devices.”
One of the more obvious ones is Apple Vision Pro. Even if it is not an immediate hit with the public, Apple’s launch has jumpstarted vigorous XR headgear development from Meta, Samsung, Google, Qualcomm and more.
Webb calls them “face computers,” a deliberately unattractive name for what she believes will be their nefarious use by Big Tech.
“It is a computer that you are strapping to your face… designed from your point of view to help you interact with the world differently. From my point of view, they’re being designed to read your intentions. They can do that in part by reading and predicting the movement of your pupils.”
She isn’t the first commentator (Cathy Hackl is another) to point out the valuable, vast and intensely private data that these XR devices are going to be hoovering up.
“Companies are highly motivated to build large action models and to get more types of data in. What’s coming is a battle for face supremacy. Companies vying to get you to wear their hardware. There’s no reward for the company with the most secure, most privacy assured device.”
Digital Merging with Organic
Enter biotech, the third big trend in the self-reinforcing super cycle. Webb makes the logical but still disturbing connection that the next stage of AI is the merging of digital supercomputers with human tissue.
“Biology processes information in a way that silicon can’t. In other words, if we’re trying to build machines that can think and behave like we do, we literally need to make them more like us.”
As much as last year was a big year for AI, it was actually a much, much bigger year for biotech — only we hear less about that.
Webb paints this scenario as fact, not sci-fi. “Sometime in the next decade, AI will be working alongside an organoid intelligence. An organoid is a tiny replica of tissue that functions and is structured like the organ.”
Scientists already work with stem cells in the lab and can grow meat from a tiny amount of cells. It’s a short step to making an organic computer.
Researchers in Melbourne already have. “In 2021 they made a miniature brain that works like a computer,” Webb said. “They attached it to a system of electrodes and then taught it how to play Pong [the old school video game]. Organoid intelligence or O-AI uses brain cells for information processing, leveraging their inherent capabilities beyond silicon-based systems. What is on the horizon, my friends, are bio computers made out of human brain cells.”
Researchers at the University of Sydney have made a cap-like device that can record your brainwaves and convert them into text using an AI model. Theoretically, you could send a thought as a prompt to Midjourney. You could hook the device to a 3D printer.
“Which means that going forward, the command line is your thoughts.”
It may take a while before O-AI can compete with traditional computers using AI, but it’s coming, if you believe Webb.
Eventually, biological computers are going to be faster, more efficient and more powerful than today’s conventional computers, and they will take a fraction of the energy to operate, she predicts. “Biotechnology will move us past silicon-based computing systems.”
Techno Optimism or Techno Authoritarianism?
And guess who is going to be in charge? Scary thought… Not you or me.
“The technology super cycle is concentrating power right now among a dangerously small group of people who hold significant power and influence in society, in government and politics, and in our economies, because they control our tech resources, because they’ve amassed great wealth and because they control how we communicate ideas with each other.”
Webb warned that the “tech messiahs” are going to try and save us from the technology super cycle. “They call it effective altruism or techno optimism. But from the outside, it looks a heck of a lot more like free market techno authoritarianism. And I’m not okay with that.”
Another scary thought. How about an AI that could predict when you will die? Danish researchers have already built it. It may not be accurate today, but in short order it could be.
“If I’m a bank, considering whether to give you a 30-year mortgage for a house that you want to buy, it seems like I might want to build a large death model. Something like that might come in handy.”
Can anything be done? Like other liberal, or perhaps techno-pessimist, commentators, Webb doesn’t have a concrete answer beyond hope.
“We don’t have to submit. We don’t have to give up our agency. We can support each other through this great transition,” she insists.
“We don’t need someone to save us. We just need to do better planning for the future. Our elected leaders need to look forward, not backwards. I don’t care how old they are. You need to establish a Department of Transition. We are Gen T. You need to do this.”
FTI’s 2023 Tech Trends report says entertainment has reached a tipping point, with new technologies enabling novel forms of expression.
Posted March 14, 2024 (updated March 22, 2024)
NAB Show Amplified: Actionable Insights for Brands Navigating the Creator Economy
TL;DR
Modern advertising strategies often require brands to dip their toes into the Creator Economy, however shallowly. But they must do so in a way that understands both how and why content is consumed and in which format.
Also, as the lines blur between legacy media formats and user-generated content, brands cannot afford to ignore the significance of the Creator Economy.
Do you know “what’s the right strategy to hit who you’re actually targeting?” Deloitte Principal Dennis Ortiz says understanding consumer habits and the underpinnings of the Creator Economy are crucial to brand success.
“Different generations use different platforms, whether it’s legacy media or social media, for different reasons, and coming up with the right strategy to engage cross platform is likely going to hit optimal outcomes for advertisers,” Ortiz explains in an interview with NAB Amplify. Watch the full video (below) or read on for more insights from the conversation.
It’s important to understand the “why” behind certain types of media consumption.
Ortiz says, “Legacy media tends to be about immersion and experience.” For “Gen X and the older generations, they actually look at legacy media — TV shows, movies, etc. — as the most immersive form of entertainment.”
“Social media, on the other hand, is really more about connection and convenience,” he says. “Social media is driven by creators. And we actually have studies that suggest that, indeed, consumers are very attracted to creators that have similar interests, or shared ideas and values, and spend most of their time on social media following those types of creators that have those similar interests and values.”
One such study is Deloitte’s recently published The Creator Economy in 3D, which explores how social media, content creators and influencers are changing the way consumers interact with brands, media and advertising.
“A big part of this is not just the creators, but also the ability of the creators to create another forum for brands to connect with consumers,” Ortiz explains. “We actually have some research that would suggest that three out of five consumers are more likely to engage with a brand if one of their favorite creators actually recommends that brand.”
Deloitte’s study found the average consumer has five favored content creators, which it says “are the social media equivalent of a favorite TV show, perhaps with less regular schedules.” And for Gen Z, that number increases to 10.
This increase in social media time means the average person is engaging with many different voices in content from leading studios, micronetworks, and amateur user-generated content, to full-time content creators.
To be the most effective in driving followers’ purchasing decisions, the Deloitte study says “relatability” is key for content creators, meaning that they share a similar socioeconomic background and/or have hobbies/interests in common.
Also notable: “77% of consumer-creator relationships [Deloitte] surveyed could be traced to either a shared interest or desire to learn from that creator.” And for Gen Z specifically, aspiration is key; 45% indicated they follow “creators out of admiration for their lifestyle.”
“Online communities,” Ortiz says, “are going to become increasingly important. That is why individuals are gravitating towards these platforms, because they’re able to find individuals or creators that are very similar to them or that they aspire to be.”
Understand Who Wants What
“Social media and user generated content will increasingly capture consumer attention,” Ortiz explains. In fact, “UGC is the primary form of entertainment for the younger generations.”
The study notes that “Gen Z and millennials spend 26% to 37% more time on social media than previous generations,” reinforcing those platforms as a good way to reach those target audiences.
And remember that the oldest members of the Millennial cohort are now 43 years old, and the youngest Gen Z are now 12 years old, according to Beresford Research, making both demos extremely desirable for many brands.
“I think that says a lot about where their consumption habits are, and where advertisers should be spending more of their dollars to capture that specific audience,” Ortiz says.
And why are Millennials and Gen Zers gravitating to social media? One explanation, per Ortiz, is that “social media platforms are now bringing a broader set of entertainment options to the table.”
Blurred Lines and New Opportunities
Take YouTube, for example. Ortiz says, “Now they have YouTube TV. They have the largest music library out there. They’re in podcasts, as well as gaming. YouTube now, as per a Nielsen report that came out in January, is the largest streaming service out there. And they’re … the fourth largest cable provider out there.”
With new generative AI tools entering the market all the time, UGC is also set to grow exponentially, and “that could continue to drive more eyeballs to the platform[s],” Ortiz predicts.
“The lines are blurring between legacy media, and what will be considered social media,” Ortiz emphasizes. “If you ask someone in the Gen Z age group… what movie they watch, they may be talking about a Mr. Beast movie that they saw on YouTube. Whereas if you ask some of the older generations, they might be talking about ‘Oppenheimer,’ which they saw in the theater or saw on streaming.”
Social media companies are no longer just competing with each other. He says, “What started as a creator-driven platform is now becoming an all encompassing entertainment platform. And so the legacy media companies should be thinking about ‘how do we play in that space, of being able to capture audience that might be diverting to other forms of media?’”
And companies should remember that “there is a symbiotic discovery relationship between social media and legacy media,” Ortiz says.
For example, “We have some research that suggests that consumers are driven towards specific television shows, as well as movies, based on what they’ve heard on social media.”
This Era of Convergent TV Requires a Cross-Media Ad Marketplace
TL;DR
The convergence of linear and streaming is redefining advertising planning into a cross-media marketplace.
The overall message is that TV advertisers should be focusing on digital formats, but they shouldn’t abandon linear completely.
In a world where linear broadcast and cable represent just half of overall TV viewing, buyers and sellers need to operate with more agility and flexibility.
On the surface, TV viewing hasn’t changed. Familiar network hits like NCIS, Grey’s Anatomy, and Criminal Minds are at the top of the charts of the most watched shows by Americans. This, of course, hides the fact that how people are watching has changed radically.
While linear and cable TV represent roughly half of viewing time, the other half is solidly streaming.
The age of Connected TV is here, shaking up the planning and ad buying process.
“Media planners should be paying attention to both digital and linear TV, but attention is obviously moving toward the former,” states Insider Intelligence, which provides analysis of viewing trends.
Daily time spent with linear TV will be down 3.7% this year from 2023, totaling two hours and 55 minutes, while time Americans spend with digital video in particular will be up 5.7% this year to total three hours and 50 minutes.
“That means TV advertisers should be focusing on digital formats, but they shouldn’t abandon linear completely,” it suggests.
Subscription OTT video is becoming a bigger part of people’s media diet. Heavy use of platforms like Netflix and Hulu is contributing to the one hour and 49 minutes that people will spend watching subscription OTT this year, per Insider Intelligence’s forecast. Both of those platforms now have ads, as do other large players like Amazon Prime, Disney+ and Max.
“These platforms are intensifying the efforts to attract more users into their platform with their original content or exclusive content,” said the firm’s forecasting analyst, Jasmin Ellis.
Amid streaming’s success, linear TV is still relevant. Time spent is in decline, and that trend won’t change, but the drop-off is happening more slowly than the firm’s forecasts initially anticipated due to the viewing habits of people 35 and older, many of whom haven’t yet cut the cord, Ellis said.
Calling the merger of linear and streaming “Convergent TV,” Nielsen says that media planners need to strike the right balance “of contextual, advanced targeting, and one-to-one advertising.”
In its latest report, the media measurement body says Americans spend between four-and-a-half to five hours a day with TV (live, time-shifted and streaming), but they now spend just as much time with other media.
“We are entering the world of Convergent TV, where the line of what is and isn’t considered ‘television’ blurs,” the report reads.
It is the combination of linear and streaming into a “seamless viewing experience” that is transforming the TV landscape.
“Both the way that people consume content and the way that media companies produce and release new programming to meet that demand.”
For example, streamers are releasing new titles (both acquired and original) throughout the calendar year, as well as dropping whole seasons at once, which raises the public’s expectations for new, bingeable TV programming outside of the traditional TV season.
Nielsen says streaming usage (whether from digital-first or legacy TV companies) passed cable for the first time at the end of 2022 and is now the dominant form of TV viewing in the US, accounting for nearly 40% of total TV usage.
“For advertisers and media agencies, this is a clear reminder that TV remains a central piece of the marketing mix,” says Nielsen.
“It’s a different medium than 20 years ago when American Idol routinely had 30 million viewers tuning in, but it’s just as relevant today as it ever was. In fact, it’s a more complete full-funnel channel now because it’s increasingly more scalable and addressable, and not just for the largest advertisers.”
Nielsen points out that 60% of medium-sized brands expect to spend more on streaming and CTV, but 35% anticipate they’ll spend more on linear as well.
“As such, measurement silos and incompatibilities should be a thing of the past. Advertisers want to understand how their TV buys are performing not as stand-alone investments but in the context of their cross-media campaigns.”
Cue Nielsen’s own pitch for a measurement standard. “For measurement, data and analytics companies, a top objective now is to develop an adtech ecosystem for TV that combines the best of linear and digital and gives its stakeholders the tools they need to buy and sell with confidence.
“At the heart of that system, the industry needs common, cross-media metrics: a way to measure ad delivery and performance consistently across platforms. We have a few ideas.”
Media planning is not just about TV or CTV, though.
YouGov has charted American media use over the last five years and comes up with the unsurprising conclusion that online is now the primary advertising channel, dominating the attention of consumers.
Social media remains the top channel, YouGov states, but online video and streaming/on-demand TV are registering the highest growth. Even among Americans 45 and older, social media is now close to eclipsing TV as the top media channel, used for news and entertainment as much as for networking.
Ad tech solutions vendor Broadpeak says blending media measurement and market research is one way to effectively plan for streaming ads.
Posted March 14, 2024
What Are the Best Technologies for Targeted Ads on Streaming TV?
TL;DR
As the video streaming market evolves, all eyes are focused on improving monetization and delivering an outstanding quality of experience, says ad tech solutions vendor Broadpeak.
Blending media measurement and market research is one way to effectively plan for streaming ads, the company says.
In-context advertising testing can track how well ads maintain audience interest and measure ad impact on things like product awareness, consideration and purchase intent, within a streaming environment.
With almost 100% of US TV households subscribing to a streaming service, media planners must incorporate Connected TV and streaming video into their ad buying mix. A decade ago, the medium lacked advertising potential, but now many of the large streaming services are growing their ad-supported user base, and FAST platforms are gaining traction too.
“To effectively enter this arena, media planners need to take a balanced approach. Instead of just comparing it to TV, it’s important to understand its unique performance abilities and adjust plans accordingly,” advises Heather O’Shea, Chief Research Officer at marketing consultancy Alter Agents, writing in Advertising Week.
Unlike TV ads with fixed schedules, streaming allows for more flexibility in ad placement and targeting, more similar to digital media. Media planners should use this to deliver personalized messages that connect with viewers, O’Shea says.
Streaming video certainly presents exciting opportunities for advertisers, but many unknowns remain. For instance, measuring the effectiveness of streaming ads poses a unique set of hurdles, given the fragmented nature of the landscape and the lack of standardized metrics.
“Many media planners still have a lot of questions about how to best implement advertising on streaming platforms,” O’Shea says. “This medium lands somewhere in the middle between traditional TV, which has primarily been used as an awareness tool, and digital advertising which is heavily utilized as a direct response tool.”
Her focus is on gathering the data and insights planners need for defining the best approach. One tactic where Alter Agents have had “significant success,” she says, is using in-context advertising testing. This method can track how well ads maintain audience interest and measure ad impact on things like product awareness, consideration and purchase intent, within a streaming environment.
“It results in a deep set of insights that media planners and advertisers can put to work right away.”
Technical solutions to digital advertising will be represented across the exhibition and conference at NAB Show. Among vendors with product in this area is Broadpeak. It brings its targeted ad insertion solutions to the show and will demo a “disruptive” feature that boosts the performance of targeted advertising for video streaming services by allowing viewers to click on banner ads and receive a notification on their phone, guaranteeing clicks for advertisers.
“Today we are at an inflexion point where ad budgets will flow back to TV,” says Pieter Liefooghe, business development director at Broadpeak. “First of all, we are now able to bring the targeted advertising technology from digital advertising to the TV screen, by either implementing client-side ad insertion, server-side ad insertion, or a combination of both,” he explains.
“Secondly, due to how personal data for targeting purposes is collected and used in digital advertising, there are increasingly privacy concerns and laws that make this ad medium less attractive compared with targeted TV advertising.”
Broadpeak has also issued a “Guide to Ad Insertion,” including information on how digital advertising has impacted the spend of TV advertising. The Guide carries an overview of connected TV advertising, which has seen increased interest from advertisers, as well as an analysis of ad partnerships that can further increase the business case for TV service providers to implement an advertising solution.
At NAB Show 2024, the company will also showcase a new Spot2Spot feature.
“Today, most targeted ad technologies are limited to full ad break replacement, minimizing value for the targeted audience,” CEO Jacques Le Mancq explains. “With the Spot2Spot feature, content owners can replace specific spots within the ad break. Broadpeak will demo spot-level ad replacement for addressable TV and comprehensive ad tracking for both replaced and non-replaced ads.”
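Spot-level replacement as described above can be illustrated with a short sketch. This is not Broadpeak’s implementation, and the pod and ad names are invented for illustration; the point is simply that individual spots in a break are swapped per viewer while the rest of the pod airs unchanged, instead of replacing the whole break.

```python
# Illustrative spot-level ad replacement: only the spot indices that were
# sold addressably are swapped for this viewer; other spots stay as-is.

def replace_spots(ad_pod, targeted_spots):
    """Return the ad pod with per-index targeted replacements applied."""
    return [targeted_spots.get(i, spot) for i, spot in enumerate(ad_pod)]

broadcast_pod = ["car_ad", "soda_ad", "bank_ad"]
# Only spot 1 was sold addressably for this household.
print(replace_spots(broadcast_pod, {1: "local_pizza_ad"}))
# → ['car_ad', 'local_pizza_ad', 'bank_ad']
```

Full-break replacement would be the special case where every index appears in the targeted map.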
Alan Wolk, Co-Founder/Lead Analyst at TVREV, will be moderating the NAB Show session “The Future of FAST: Lessons Learned and What’s Next,” Tuesday, April 16 at 3 p.m.
Posted March 14, 2024 (updated March 18, 2024)
How to Maximize Ad Revenues With SSAI
As we approach NAB Show, OTT advertising is on the industry’s mind like never before. Nearly all the major broadcasters and streaming providers have embraced some form of advertising to increase ARPU and move to business models that are sustainable over the long term.
Server-side ad insertion (SSAI) is the central cog in OTT advertising because it joins streaming technology with adtech. Any issue with the SSAI means valuable advertising revenue is lost. On the other hand, SSAI has the ability to transform advertising revenues and empower providers to compete on the digital stage.
The key to maximising SSAI revenues is to allow broadcasters/customers to create an ad product that boosts the traditional benefits of TV by adding the modern benefits of digital advertising.
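To make the mechanism concrete, here is a toy sketch of what server-side stitching looks like at the manifest level, using HLS tags. This is not any vendor’s implementation; the segment names are invented, and a real SSAI service would also handle ad decisioning, beaconing and per-viewer manifests. The key idea is that the server splices ad segments into the playlist, framed by EXT-X-DISCONTINUITY tags, before the player ever sees it.

```python
# Toy server-side ad stitching: rewrite an HLS media playlist so ad
# segments appear inline, bracketed by EXT-X-DISCONTINUITY tags that
# tell the player the timeline/codec may change at the splice points.

def stitch_ad_break(content_segments, ad_segments, cue_index):
    """Return an HLS playlist with an ad pod spliced in before cue_index."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for i, (duration, uri) in enumerate(content_segments):
        if i == cue_index:
            lines.append("#EXT-X-DISCONTINUITY")  # splice into the ad pod
            for ad_duration, ad_uri in ad_segments:
                lines.append(f"#EXTINF:{ad_duration:.1f},")
                lines.append(ad_uri)
            lines.append("#EXT-X-DISCONTINUITY")  # splice back to content
        lines.append(f"#EXTINF:{duration:.1f},")
        lines.append(uri)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

content = [(6.0, f"content_{n}.ts") for n in range(4)]
ads = [(6.0, "ad_a.ts"), (6.0, "ad_b.ts")]
print(stitch_ad_break(content, ads, cue_index=2))
```

Because the stitching happens server-side, the player fetches one continuous stream, which is why SSAI survives ad blockers and replicates the seamless feel of broadcast TV.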
TV’s traditional benefits
Mass reach: Nothing offers mass reach in a short period of time like TV. It also has the ability to drive discussion and get in the public psyche – it creates “water cooler” moments that are increasingly hard for advertisers to find elsewhere.
Quality of delivery: Nowhere else can advertisers get a broadcast-quality 15-30-second ad with such high engagement and view-through rates.
Digital advertising benefits
Addressability: In the digital realm, advertisers expect addressability. Broadcasters and streaming providers must convince brands to increase spend on TV rather than YouTube or TikTok – which both offer fantastic targeting.
Programmatic: There is great potential to increase fill-rates by adopting programmatic. It helps secure the highest possible CPM for each available ad spot.
Measurement: Real-time measurement of ad views is essential for advertisers to tweak and improve their campaigns. It’s what they do across other digital channels so they want to do the same with OTT.
SSAI has the power to deliver an appealing blend of both worlds: the mass reach and viewer experience of TV and the advanced advertising benefits of digital.
Implementing SSAI to unlock the full value of OTT advertising can be complex. Here are the key considerations for broadcasters and streaming providers to enhance their advertising offerings and grow revenues:
Scale and Reliability
TV’s mass reach creates valuable water-cooler moments that advertisers are increasingly struggling to find elsewhere.
Live streaming and major sports have mass appeal and are therefore highly valuable. But applying addressability and one-to-one measurement at scale is impossible without a dynamic prefetch extension to your SSAI. Otherwise it is highly likely that ad-decisioning servers will time out and fail to return a response.
It’s important to be realistic about concurrency: the number of viewers watching at the same time. It’s not an average over a day or a period of play; it’s minute-by-minute.
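The difference between a daily average and true minute-by-minute concurrency can be shown with a small sketch (the session data here is invented for illustration): peak concurrency is what the ad-decisioning infrastructure must actually survive.

```python
# Peak concurrency from viewer sessions given as (start_minute, end_minute),
# computed with a sweep over +1/-1 deltas rather than a daily average.

def peak_concurrency(sessions):
    """Return the maximum number of simultaneous viewers."""
    deltas = {}
    for start, end in sessions:
        deltas[start] = deltas.get(start, 0) + 1   # viewer joins
        deltas[end] = deltas.get(end, 0) - 1       # viewer leaves
    live, peak = 0, 0
    for minute in sorted(deltas):
        live += deltas[minute]
        peak = max(peak, live)
    return peak

# Three viewers; two overlap during minutes 10-20.
print(peak_concurrency([(0, 30), (10, 20), (40, 50)]))  # → 2
```

A match that averages a modest audience over 90 minutes can still hit a brutal peak at a single moment, which is the load the SSAI and ad servers must be sized for.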
Mass reach is not only the domain of live streaming. VOD creates water-cooler fortnights. Some shows are a must-watch. Remember how Tiger King made Joe Exotic a household name in the space of a fortnight? Even though viewers are not pressing “play” at the same time, they are doing so in a short timeframe and putting extra demand on the streaming and advertising tech.
Maximizing inventory
In live sports, many advertising opportunities are missed because they’re so challenging to access. A half-time ad break in a soccer match can be planned for. The timing is dependent on the referee’s whistle, but the duration of the ad break and session ID is known in advance.
But what makes sport so compelling are the twists and turns: in other words, the unexpected. If a World Cup soccer match goes to penalties, an unplanned but incredibly valuable ad break is suddenly created immediately before the first penalty kick. We’ve seen audiences double between the end of extra time and the start of penalties.
Dynamic prefetch with contingency ad pods is essential to capitalise on these highly engaging and valuable moments.
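The prefetch-with-contingency pattern can be sketched as follows. This is a simplified illustration, not a production design: the ad decision is requested ahead of the anticipated break, and if the ad-decisioning server fails to answer within its timeout, a pre-cached contingency pod fills the break instead of dead air. The pod contents and timeout value are invented for the example.

```python
# Sketch of dynamic prefetch with a contingency ad pod: request the
# targeted pod ahead of the break; on ad-server timeout, fall back to a
# pre-cached contingency pod so the break is never left unfilled.

import concurrent.futures
import time

CONTINGENCY_POD = ["house_promo_15s", "house_promo_15s"]  # pre-cached fallback

def resolve_ad_pod(request_ads, timeout_s=0.2):
    """Prefetch a targeted pod; return the contingency pod on timeout."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(request_ads)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return CONTINGENCY_POD

# A fast ad server returns the targeted pod; a slow one triggers the fallback.
print(resolve_ad_pod(lambda: ["targeted_ad_30s"]))
print(resolve_ad_pod(lambda: (time.sleep(1), ["too_late"])[1], timeout_s=0.1))
```

In practice the prefetch window is opened minutes before an anticipated break (half-time, the end of extra time), so even an unscheduled penalty-shootout break finds a decision, or at worst a contingency pod, already waiting.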
Campaign management
SSAI is not simply a case of switching on a tap and letting the ads flow in. It must be integrated closely with the adtech ecosystem to deliver ad operations teams the right data to manage their campaigns. In order to consistently deliver the highest fill-rates, real-time measurement of ads viewed must be surfaced within a live 24/7 campaign dashboard.
In OTT, too much advertising is measured by ads stitched. That method is not sufficient for ad ops to make informed decisions about their campaigns.
UX and complexity
SSAI delivers a consistent, seamless viewer experience across all content types and devices. It effectively replicates the experience of traditional TV. It also goes some way beyond that.
SSAI must support all kinds of UX features, from clickable ads to scrubbing. Increasingly, viewers are expecting longer DVR windows, meaning the SSAI must be able to support whatever business logic is required to maximise the potential of advertising in live-rewind mode.
As you can see, SSAI is capable of delivering a huge amount of added value to OTT advertising propositions. As the industry’s reliance on advertising revenue grows, it is increasingly important that broadcasters and streaming providers offer the best possible ad product to the market in order to deliver better value and appeal to more advertisers.
Learn from these industry experts as they navigate the world of FAST content, user engagement and monetization.
Here, Wolk takes a look at the FAST landscape.
What are the biggest trends impacting FAST right now? Alan Wolk: Three of the biggest trends we are seeing are:
Better User Experience: This includes everything from better integration between linear and on-demand, improvements to the interface, better recommendations and personalization.
A Push To Quality: Now that FASTs are growing up, they are getting rid of their lower-performing content and replacing it with more premium content. The fact that studios are now licensing these sorts of shows to the FASTs helps too. This means that the notion that anyone who owns some content can magically stand up a FAST channel and monetize it is coming to an end.
More Local Content: FASTs are adding in local news and other local programming. Most is coming from local broadcasters, the rest from streaming-only services. But overall, the demand for local programming is high.
What are the biggest challenges those working in FAST have to overcome? The notion that it’s not “real TV” is a big challenge. So is the lack of transparency around data. Advertisers feel they are not getting enough feedback on where their ads run, and rights holders also feel they’re not getting adequate viewing stats.
That needs to change if FASTs are to succeed. The other challenge is that there is not much consistency across the FAST ecosystem.
Some of the bigger players can sell fully intact channels to the FAST services with shows that run at the same time across all of them. But for most, the FASTs are curating their own channels, which makes it challenging for advertisers.
What is one thing you wish FAST professionals had a better understanding of? There is so much confusion about nomenclature. It’s as if people don’t seem to realize that there are FAST services like Pluto TV and The Roku Channel, and that those services have linear channels (aka “FAST Channels”) and on-demand programming. Some people insist on talking about the on-demand content as if it is separate, or referring to “FAST Channels” as if they were somehow detached from the services they run on.
But if you’re an advertiser, your inventory runs on both linear and on-demand—it’s not as if they sell them separately. There’s also a lack of understanding about the different types of FAST services, the idea that FASTs attached to device OEMs are very different than FASTs owned by the major media companies and the growing range of independent FASTs. And that there is also considerable overlap between all three.
What are the top three things that attendees should go hunt down on the show floor to expand what they just learned in your sessions? I would look at what the OEMs are doing with their FAST services: how are they integrating them into the main interface so that they are front and center of the viewer experience?
Talk to the independent FAST services and those attached to major media companies as well.
Finally, I would attend some sessions on advertising, since FAST is all about advertising.
What discussions should they be having with the exhibitors? Ask them about what they are doing to differentiate themselves from other FAST services. What is their competitive advantage for consumers? For advertisers? For content owners?
Both Consumers and Creators Need to Take Responsibility for AI Content
TL;DR
Adobe CEO Shantanu Narayen talks about how the company is incorporating AI and its work to tackle misinformation, urging creators to either use AI or miss out.
Narayen acknowledges the responsibility of companies like Adobe to mark the provenance of content generated by AI, but puts the onus on consumers to be more aware of the media they consume.
He doesn’t believe artists will be overtaken by AI, only that Adobe will work with AI to build new tools for creators — but then what else is he going to say?
“If you don’t learn to use AI you’re going to be in trouble,” declared Adobe chair and CEO Shantanu Narayen, who also put the onus on the general public to learn more about AI and to question the veracity of the content they are served up as fact-based news.
He was speaking to Washington Post tech columnist Geoffrey Fowler in an illuminating exchange about how the tech vendor is seeking to balance its commercial aims with tackling misinformation.
There’s also an existential threat to Adobe itself. Won’t generative AI simply erode any business the vendor has to market its own content creation tools?
Narayen responds: “I think [AI] is actually going to make people much more productive, and it’s going to bring so many more marketers in small or medium businesses into the fold [to be able to use Adobe’s tools even more easily to create content].”
“AI really is an accelerant. It’s about more affordability. And it’s about more accessibility. And Adobe has always won when we solve problems and allow more people into the field.”
He maintains that GenAI is a good thing, on the whole, both for creators and for Adobe itself:
“It is going to be disruptive if we don’t embrace it and we don’t use it to enable us to build better products and attract a whole new set of customers. But I’m completely convinced that this will actually be an accelerant to further democratize technology, rather than just a disruption.”
Fowler asks how Adobe can convince the creatives who buy its tools that these tools — and Adobe’s AI Firefly — are not in the process of replacing them with AI.
“I’m convinced that the creatives who use AI to take whatever idea they have in their brain are going to replace people who don’t use AI,” Narayen replies.
“If people don’t learn to use it, they’re in trouble. I would tell young creators today that if you want to be in the creative field, why not equip yourself with everything that’s out there that enables you to be a better creative. Why not understand the breadth of what you can do with technology? Why not understand new mediums? Why not understand the different places where people are going to consume your content? A knowledge of what’s out there can only be helpful rather than ignoring it.”
Keeping the creator community at the center of its brand, Adobe has opted to differentiate itself from other AI tools developers, like Stable Diffusion or OpenAI, in training Firefly on data that it owns or that creators have given permission for use.
“I think we got it right, in terms of thinking about data and in terms of creating our own foundation models and learning from it,” he says. “But most important in creating the interfaces that people have loved. I think we’ve been really appreciated by the creative community for having this differentiated approach.”
The conversation shifts to the dangers of AI, and how much of a threat AI poses to truth. Fowler notes that people have long been able to use Photoshop “to try to lie or trick” people into believing misinformation, so what’s different with GenAI?
Narayen says technology has always had unintended consequences. “It’s an age-old problem, [but where] generative AI is different is the ease at which people can create content. The pace at which it can be created is going to dramatically expand,” he says.
“So it’s incumbent on all of us who are creating tools, and those distributing that content, including the Post, to actually specify how that piece of art was created, to give it a history of what happened.”
“The challenge — and the opportunity — that we have, is that this is not just a technology issue. Adobe and our partners have worked to implement credentials that identify, definitively, who created a piece of content, whether AI was involved and how it was altered along the way. The question is, how do we as a company, an industry and a society, train consumers to want to look at that piece of content before determining whether that was real or not real,” Narayen says.
“We’re going to be flooded with more information. So it’s the training of the consumer, to want to interrogate a piece of content, and then ask Who created it? When was it created? That is the next step in that journey.”
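The content-credential idea Narayen describes (who created it, when, was AI involved) can be sketched in miniature. Real Content Credentials use the C2PA standard with certificate-based signatures embedded in the media file; the HMAC key, field names and content below are invented stand-ins. The essential property is the same: the provenance record is cryptographically bound to a hash of the content, so any edit invalidates it.

```python
# Toy provenance record: bind creator/AI-use claims to a content hash and
# sign the record, so tampering with either the content or the claims
# makes verification fail. (HMAC stands in for real C2PA signatures.)

import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # hypothetical publisher signing key

def issue_credential(content: bytes, creator: str, ai_used: bool) -> dict:
    record = {
        "creator": creator,
        "ai_used": ai_used,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_credential(content: bytes, record: dict) -> bool:
    claims = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        record["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    return sig_ok and claims["content_sha256"] == hashlib.sha256(content).hexdigest()

cred = issue_credential(b"original image bytes", "Jane Doe", ai_used=True)
print(verify_credential(b"original image bytes", cred))  # unmodified content
print(verify_credential(b"tampered image bytes", cred))  # edit breaks the record
```

The consumer-education step Narayen calls for amounts to teaching viewers to run the second function: check the record before trusting the pixels.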
Fowler pushes back, asking just how much onus should be on the user, or viewer, and how much responsibility publishers or AI vendors should share. He points out that Adobe was selling AI-generated images of the Israel-Gaza war, and that Adobe said the images were released because they were labeled as made by AI. “But is that just proof that the general public is not adequately prepared to identify generative AI images versus originals?” he asks.
“The consumer is not solely responsible for all obligations associated with trying to determine whether it’s AI or not,” Narayen said.
“Certainly, distributors of that content and the creator of the content also have a role to play [but] the consumer has a role to play as well because they’re the ones who are at the end of the day consuming the content.”
He emphasizes the need for consumer education and insists that consumers take some, not all, but some responsibility for how they interpret the content they view, hear or read.
“The more a company like the Washington Post promotes this notion of content credentials, [the] education process will increase.”
Narayen also defends Adobe by saying it is not a source for news. “Adobe only offers creative content, we do not offer editorial content. And what people were doing was trying to pass off what was editorial content or actual events as creative. So we have to work and moderate [or] remove that content.”
Fowler counters that while content credentials are welcome to those who view them as a good idea, they still leave open the misuse of AI in content generation by bad actors. What can be done about them?
Narayen doesn’t really have an answer other than widening the education of the public. “The good guys are going to want to put content credentials in to identify their sources or identify what’s authentic. I think if we can continue to train other consumers to beware in terms of content [provenance] that’s one step in terms of the evolution of how we can educate people.”
He is optimistic about winning the battle. “We will get through this in a responsible way and it will both make people more productive and will make them more creative. We will respect IP, perhaps in a different way than it was done when it was just a picture, but it will happen, I’m confident of that. Companies and governments will work together to have the right thing happen.”
Posted March 13, 2024
NAB Show Amplified: What Content Creators Need to Know for a Volumetric Video Future
TL;DR
Video content is transitioning from 2D experiences on flat screens to immersive spatial experiences, says Supersphere founder and CEO Lucas Wilson, requiring creators to adapt to technologies like virtual and extended reality.
The industry awaits the emergence of a “YouTube for spatial experiences,” Wilson says, a platform for sharing 3D content, expected to be developed by tech giants like Meta, Apple, or Google.
Supersphere will be introducing ArkRunr, a platform for virtual production that simplifies creating immersive environments without the high costs and complexity of current VP methods.
We are entering a world where video will no longer be captured and presented as a 2D experience on flat screens, but as spatial experiences enjoyed with virtual or extended reality head gear.
The building blocks are being put in place and creators of all kinds better get ready, says Lucas Wilson, founder and CEO of Supersphere.
“Immersive experiences matter because the user is more engaged. So the right question is not why has VR failed to take off, but what direction is content going in? All the trend lines point to content being more immersive.”
At Supersphere, Wilson has helped transform the live performance space by creating hundreds of XR, AR, VR and MR experiences for music artists ranging from Paul McCartney to Billie Eilish and Post Malone. The visionary exec, who will be presenting “Virtual Production for Content Creators” at the 2024 NAB Show, has seen the future and says the next evolution of video and immersive experience is volumetric.
“What doesn’t exist right now — and maybe Apple will create it soon — is the YouTube of spatial experiences. We’re ready for that,” he says.
The ability for anyone to capture and share experiences in the 3D world is coming, he says.
“Meta or Apple or Google will come up with the first true spatial distribution platform. The YouTube of the spatial world. I think that’s where we will all want to live.”
Arguably commercial VR really only began in 2019, when Meta released Quest, so it shouldn’t be surprising that VR has not moved beyond being a niche industry. Yet Meta has sold 20 million of its headsets and the Vision Pro, albeit in limited run, sold out in hours.
Wilson views VR as part of a continuum of immersive experiences which has taken us in short order from analog to HD to UHD TV via stereo 3D TV. “Each tech advance is aimed at delivering a more immersive experience, but while TV set engineering and content distribution have been around for 80+ years, virtual reality is only just getting started,” he says.
“Headsets are a temporary anomaly. I think most people in the industry agree with that. The real answer will be when we have VR glasses.”
Meta’s Ray-Bans are one example. Another is being developed at Brilliant Labs. They are lighter, more comfortable, less obtrusive and, frankly, cooler.
“For a start, they won’t make people want to punch you if you’re wearing them in public,” says Wilson. “Headsets are always going to be a niche market because there are only a certain amount of people that will actually want to strap a device to their face, no matter how cool it is.”
He predicts that in a couple generations of Qualcomm chip development the electronics will be small enough to fit inside AR glasses.
“Once that happens, with VR headsets in eyeglass form, then I really believe that our fundamental world changes in terms of how we communicate,” Wilson foresees.
“Kids already live and breathe by sharing content and communicating via digital devices. It’s natural to them but they still share 2D images. In the next couple of generations [of consumer electronics] they’re going to start sharing Volumes, they’re going to start sharing spaces and environments that they can interact with each other in. Once that happens, then why would you ever share a 2D photo again?”
Supersphere is getting ahead of the curve by bringing to market a new content creation tool capable of manipulating video and virtual worlds in a native 3D space.
It is called ArkRunr, and it launches right after NAB Show in April, initially targeting virtual production.
Wilson believes there’s a huge market for VP-style content creation but without the cost and paraphernalia of conventional LED volumes, camera tracking systems, and VADs.
“Anybody who has worked in virtual production knows that it is complicated, expensive and time consuming to achieve a good outcome. Moreover, there are no tools that exist in the mid-budget-to-creator range for that kind of work. So, we built our own.”
ArkRunr has in fact been used by Supersphere on lots of shows “with major artists,” so successfully, in fact, that Wilson decided to commercialize it.
Wilson calls it a Spatial Performance platform. The software ingests live video feeds (from a smartphone, for example) of an artist performing on a stage, or even their bedroom, and wraps it in a virtual environment complete with interactive lighting. The platform runs on Windows and requires the computing power of “an average gaming laptop.”
“Every musician, every creator streaming from their bedroom, their living room wants to up their game. This allows them to broadcast in custom XR, AR or VR scenarios with interactive lighting,” Wilson continues. “Another big market for us is corporate. You could imagine a virtual TEDx stage, a video presentation and dynamic lighting for a corporate keynote with high production value.”
With generative AI tools added to the mix, the ability to create digital content is going to be supercharged. Supersphere, for instance, has incorporated AI into ArkRunr to create lighting for specific musical styles.
“We are training [our algorithms] on thousands of hours of real lighting shows according to musical genre.”
Supersphere’s ambition is to be the “Live Nation of the immersive world,” says Wilson, “because we are licensing virtual representation rights for spaces that exist today and those that no longer exist.”
He elaborates, “If you want to play in the Cavern Club with the Beatles in the 1960s or with the Bee Gees in Studio 54 then we can bring them back to life. If you want to imagine the Cavern Club in a cyber-tech future, you can.”
March 13, 2024
Could AI Deconstruct Hollywood, Then Build It Anew for Everyone?
TL;DR
The radical production efficiency of AI is anticipated to have resounding creative implications.
On the one hand, AI will collapse the traditional content creation industry and conventional creative and technical roles dependent on it.
On the other, AI will be in the hands of literally anybody, opening new and unforeseen storytelling possibilities that could benefit diverse communities. Who could argue with that sentiment?
Filmmakers and artists are grappling with what AI means and no one can quite decide if it’s a good thing or a bad thing.
There are many apocalyptic scenarios for the film and TV industry, the most extreme of which sees the entire studio system (including even broadcast) collapsing, replaced by AI tools that can perform every function.
Yet this is also depicted as a double-edged sword we should welcome as the ultimate in democratization and infinite storytelling possibility.
This optimistic view appears just that — optimistic verging on the fingers-crossed — as experts look for a silver lining in the inevitable technology change sweeping the industry.
Perhaps we should even be making a dividing line in human history: Before AI and After AI.
As photoreal video and finessed prompt-to-text generation advances, it won’t be long before any movie or TV show, still image, painting, or novel created in the centuries of B-AI history is viewed as an outdated artifact.
More than that, the ability of AI to simulate anything could call into question how any and every work of art to date was crafted.
Even “behind the scenes” footage of humans actually crafting a film on set could be called into question. It could be faked, right?
That’s a pretty soul-destroying thought, but let’s have faith that we record and hand down the history of creation so that future generations appreciate the sweat, skills and inspiration and collaboration it took to make, say, Singin’ In The Rain or Raiders of the Lost Ark.
Hamper, however, also points out that our trust in what we see on screen has always been one of suspending disbelief. If someone is shot dead in a TV drama, we already know the actor wasn’t killed in real life.
“From reality TV narratives, to film lighting to special effects snow, you accept it. It’s all just been sorcery happening behind a screen. We have become fully locked into this fake reality,” he says.
“But at least it is human fakery,” Hamper adds, concerned that now even the skills with which humans used tools to “fake” things on screen will be completely taken over by machines.
Then he flips his own argument on its head. He believes (hopes?) that humans will still be essential for the creative process, “at least for now.”
“The one thing I encourage all creatives to think about is not to how to cut ahead of the curve of AI, not how to monetize it or clamber on the bandwagon, but to stop and think how these tools can help tell stories that have not been possible.”
The death of trust “may not be a bad thing,” he says, “if we can use AI to conjure stories that help humanity connect with one another and the world around us in ways that have not been possible before.”
Before we call time on the content creation ecosystem, let’s take another perspective. The stock footage industry, for instance, is reckoned by almost every pundit to be virtually wiped out, and soon.
This sector of the industry was predicted to be valued at $7 billion by 2027, according to research firm Arrington. That was in 2022. Since then, market leader Shutterstock has partnered with an AI developer to grow its image library with AI stills and video.
“The underlying business model of an industry that was supposed to near $8 billion in just a few years, is essentially wiped out in the medium term,” says a review of AI’s impact by Synapse.
Think again.
“The idea of going to these sites and purchasing 10 seconds of footage will fade. But high quality data is the only way OpenAI or any competitor will be able to create a usable model. It essentially shifts every B2C stock site to a B2B video supplier. OpenAI may also enlist an army of stock filmmakers to collect certain scenarios that are missing from the model.”
What about VFX? Surely another industry that will be upended by AI. Won’t the $400 billion animation industry dominated by the likes of Disney and Netflix “see massive disruption as the technological moat drops significantly?”
Maybe. Or maybe the money that went to a few (studios) will now be shifted. It stands to reason that one group to gain will be those supplying the underlying tech, thinks Synapse. Not necessarily the AI tool developers, but the makers of computer processors required to power the data crunch. (Could NVIDIA CEO Jen-Hsun Huang become the richest man on the planet?)
The rest of the pie could go to creators hitherto largely cut out of the greatest rewards.
“The industry risks being over-reliant on AI video models to serve their customers by making [content] more similar to wrappers than the foundations that help builders create,” says Synapse.
“Think of it this way. Rather than an entire team of animators, VFX, lighting specialists and more, an individual with a story to share will be able to create and distribute a story at high speed and efficient cost. Creation of new worlds in the gaming and VR space will be streamlined and available to the individual creator.”
Others also see this upside in the evisceration of the traditional content creation industry model.
Chris Wells is a content marketer, but his words appear on behalf of Lightworks, the editing system favored by Thelma Schoonmaker, among others.
In an essay written for the Lightworks blog, he endorses the optimistic outcome of AI even as it destroys jobs. Think of it as a phoenix from the flames.
“Aspiring filmmakers will no longer need expensive equipment and large teams to bring their ideas to life. Instead, all that will be needed is an internet connection and an idea to manifest all the rich, cinematic scenes one’s future auteur heart could desire.”
It’s a good thing, if you follow this line of thinking.
“Directors will be able to rapidly turn their visions into footage, learning from results and refining iterative drafts in a fast feedback loop previously impossible in such a visual medium. Entire short films could be brainstormed, drafted, revised, and finalized in days rather than months or years,” Wells continues.
“Filmmakers will also gain the flexibility to experiment with a wide range of styles and narrative directions, unencumbered by the practical constraints of traditional filmmaking. By streamlining the technical aspects of production through AI, Sora will liberate creators to focus purely on their directorial craft.”
What’s more, he contends, with a tool as powerful as AI in the hands of anybody, previous barriers for women, people of color, or people with disabilities will fall away. Who could possibly argue with that utopia?
“These instant video creation capabilities could place indie artists and major studios on equal footing like never before,” Wells writes. “Aspiring directors might no longer need to struggle to raise funds or await permission for the ‘right’ location. Their visions could spring to life at their fingertips. Lowering the barriers of entry through technology may lead to an exponential growth of new filmmaking talent from underrepresented communities.
“By making professional filmmaking radically accessible, Sora has the potential to promote empowerment and self-actualization for all.”
You can’t argue with its statement: “Whether we like it or not, we are forcibly standing on the precipice of a new era in technological innovation,” but you might take issue with the hope — for that’s what it is — that humans remain at the center of the creative process.
Lightworks wants to preserve “the human element in the AI Age,” says Wells.
“While Sora promises creators radical new capabilities for magical instantaneous video generation, the essence of videomaking remains profoundly human.”
Perhaps resistance is futile. While AI pushes the boundaries for experimenting with stylistic techniques once deemed practically impossible, “filmmakers must lead in establishing best practices for AI tools to expand creative possibility without overtaking human artistry or ethics.”
March 11, 2024
NAB Show Amplified: Generative AI’s Impact on Content Production
Captivating narratives, stunning visuals and life-like character interaction: Artificial intelligence (AI) is disrupting traditional content creation and delivery.
Dr. Hao Li is the CEO and co-founder of Pinscreen, a Los Angeles-based startup that builds advanced AI-driven virtual avatars, as well as an associate professor of computer vision at Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi.
In his presentation, Dr. Li will peel back the layers of generative AI for production – including the latest advancements in AI lip sync technology, face swap and de-aging, as well as the future potential of AI technology.
Dr. Li will present “Generative AI for Content Production: From Storytelling to Visual Effects, AI Lip Sync, and Beyond” on Saturday, April 13 at 10 a.m. at the NAB BEIT Conference Opening Session. Register here to attend.
“Dr. Li’s insights into how AI is shaping the future of content delivery will be invaluable to conference attendees,” says John Clark, senior vice president, NAB Emerging Technology and executive director, PILOT.
“This high-energy conference kickoff lays the groundwork for over 70 presentations and panels that will explore the rapidly changing media technology landscape.”
The NAB BEIT Conference will focus on the future of content delivery, next-generation systems and the opportunities and challenges at hand throughout its 70+ sessions. The conference’s forward-looking focus is designed for broadcast engineers and technicians, media technology managers, broadcast equipment manufacturers and R&D engineers. Key sessions include:
ATSC 3.0 Topics: Presentations include a look at challenges, new technology and research, and implementation differences of NextGen TV.
Cybersecurity for Broadcasters: Explore strategies for protecting media assets and maintaining secure connectivity in live distributed production, discuss the convergence of artificial intelligence, cybersecurity and broadcasting as technological milestones and take a look at the fight against piracy.
Generative AI for Media: These presentations include recent advancements in transcription, translation and re-voicing, their ethical implications for media editing, the application of generative AI to leagues and media organizations and using AI to manage title versions and achieve global distribution requirements.
Emerging Technologies in Media Delivery: Delve into boosting multilingual broadcasting with AI/ML, monitoring enhancement strategies for elevated global content integrity and examining how private 5G networks challenge traditional Wi-Fi and public 5G for video streaming and wireless connectivity dominance.
Panels on AI in Media: These sessions look at the groundwork being laid to enable AI to perform basic interoperability between services and vendors, as well as how AI can offer solutions to multilingual broadcasting.
Radio Topics covering Visual Content, Remote Audio Operations and the Air Chain: These broadcast radio-focused presentations discuss the adoption of enhanced screens in cars increasing listener engagement and new revenue opportunities, exploring how latency can be divided into layers in light of the significant delays brought by remote and cloud production, and the challenges with AM and FM radio that impact not only audio quality but PPM encodability.
Navigating the Creator Economy: AI Video Generators for Social Content
TL;DR
AI tools designed to improve and speed up your marketing communications are legion. Here are a few of the latest video generators powered by AI.
Users can create videos for a wide variety of use cases with these tools. This includes generating educational videos, explainers, product demos and social media content.
These tools all work in similar ways, so it’s a question of try before you buy (or before you publish).
Video content is a must-have for businesses and content creators wanting to compete. At the same time, it has never been more accessible to create professional-looking video content by using AI to do most of the work. This article lists some of the more popular AI video generators targeting marketers or anyone in business looking to create everything from HR training videos to YouTube clips, highlight reels, voiceovers or targeted marketing content to be published online and on social networks as YouTube videos, TikTok Reels or video ads.
They pretty much all share some common denominators. They don’t require much, if any, prior experience of editing or design. Many are browser-based, meaning they can be accessed from anywhere via an app. Most work with just a few clicks, requiring the upload of some raw content (a blog post, for example) and the user’s choice of voice and “avatar” to personalize the video. The output of shortform content complete with background music, graphics and templates is done in a few minutes.
However, as SproutVideo’s Conner Carey points out, the videos they generate leave significant room for improvement. “These shortcomings make them most effective at creating faceless videos with voiceovers, such as for FAQs and blog post summaries,” he notes.
Currently, all of these tools produce about the same quality of AI-generated video. What differentiates the good from the bad (and the ugly) is how easily each platform lets you edit the video, adding your own footage, scenes, music and more.
Most reviewers advise trying a few (most offer free trials) to ascertain ease of use and results.
Here’s a pick of 10 AI video generators for marketers available to use today, leaning on the selections of Alex McFarland at Unite.ai.
With Pictory you start by providing a script or article, which will serve as the base for video content. For example, turn your blog post into a video to be used for social media or your website.
“This is a great feature for personal bloggers and companies looking to increase engagement and quality,” says McFarland. “It’s simple to use and takes just minutes before delivering professional results that help you grow your audience and build your brand.”
Another feature of Pictory, for those looking to create trailers or share short clips on social media, is that you can create shareable highlight reels. And you can also automatically caption and summarize videos.
Synthesys relies on text-to-video technology to transform scripts into dynamic media presentations. Creators and companies can use Synthesys to create videos with lip-syncing AI video technology. All you have to do is choose an avatar and type your script in one of 140+ languages, and the tool will do the rest.
The software offers 69 real “Humatars,” and a voicebank of 254 unique styles. It also offers full customization, an “easy-to-use” interface for editing and rendering, and high-resolution output. Again, it is aimed at marketers or creators wanting to generate explainer videos and product tutorials in minutes.
But it’s not to be confused with Synthesia, another platform targeting brands that also enables users to quickly create videos with one of 70 AI avatars. Besides the preset avatars, you can also create your own. Synthesia claims to be used by some of the world’s biggest names like Google, Nike, Reuters and BBC.
McFarland notes that Synthesia’s AI voice generation platform “makes it easy to get consistent and professional voiceovers, which can be easily edited with the click of a button.” These voiceovers also include closed captions. Once you have an avatar and voiceover, you can produce quality videos in minutes with more than 50 pre-designed templates.
If you’re looking for a more powerful AI to generate marketing and explainer videos, InVideo might be the one. It doesn’t require any background in video creation or video editing, either. All you have to do is input your text, select the best template or customize your own, and download the finished video. The video content can then be shared directly to social media. InVideo says its users develop promo videos, presentations, video testimonials and slideshows.
HeyGen claims to make video creation “as easy as making PowerPoints.” Once again, the process is to record and upload your real voice to create a personalized avatar, or simply type in the text that you want. There’s a wide range of voices with more than 300+ to choose from. There are multiple customizations available including combining several scenes into one video and, of course, adding music that matches the theme of the video.
The Deepbrain AI tool offers the ability to create AI-generated videos using basic text. Simply prepare your script and use the text-to-speech feature to receive your first AI video in five minutes or less.
VEED also makes it easy to transcribe your video files in one click. All you have to do is upload your video, click “Auto Transcribe,” and download the transcript. With its free video editing app, you can work on creating content right in your browser.
Fliki apparently makes creating videos as simple as writing with its script-based editor. To McFarland, Fliki stands out from other tools because it combines text-to-video AI and text-to-speech AI capabilities to give an all-in-one platform for content creation. It features more than 2,000 text-to-speech voices across 75+ languages.
The Colossyan video generator enables users to choose from a diverse range of avatars and provide the avatar with a script. After your first video is generated, you can then target different regions by auto-translating your whole video with the touch of a button. You can easily change accents and clothing and choose from upwards of 120 languages.
Elai.io users generate video from the link to an article or a blogpost in just three clicks. You first copy and paste a blog post URL or HTML text before choosing one of the templates from the library. All that’s left to do is review the video, make any changes, and render and download it. There are over 60 languages available and more than 25 avatars to choose from. Besides selecting a presenter from the library, you can also request a personal avatar.
Biteable is an AI video assistant and in-browser editing suite that helps create simple, templated videos from script to edit with just one prompt. You choose the video type (explainer, promo, how-to, etc.), the format (landscape, vertical or square), and the visual style from a variety of options and, of course, enter a descriptive prompt. It generates a slideshow-style video complete with AI-generated script, stock video, images, and royalty-free background music.
While it may not win any awards for cinematography, “it’s incredibly useful for creating quick social videos or promoting product updates,” rates Vidyard.
Vidyo.ai uses AI to create Reels, TikToks, and YouTube Shorts from long-form video content. Once you upload a video or insert a URL, the platform takes a few minutes to produce a handful of potential short videos with captions. Munch is a comparable tool spotted by Vidyard, “albeit with slightly less impressive results. However, some users may find the customization functionalities easier to use.”
A room-size computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the US Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.
The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past — and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.
The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.
Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction of what comes next the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better and better as they interact more with users.
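The error-driven update rule described above can be sketched in a few lines of Python. This is a minimal textbook perceptron, not a reconstruction of Rosenblatt's analog hardware; the data and parameter names are illustrative:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn a two-category linear classifier.

    samples: list of (features, label) pairs with label in {-1, +1}.
    """
    n = len(samples[0][0])
    w = [0.0] * n  # the "connections," initially zero
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != label:
                # Wrong answer: nudge the connections toward the true label,
                # just as the Perceptron altered its wiring after a mistake.
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                b += lr * label
    return w, b


# Tiny linearly separable example: points above vs. below the line y = x.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), -1), ((2.0, 1.0), -1)]
w, b = train_perceptron(data)

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

Modern networks replace this single layer of weights with millions of layered ones and the mistake-driven nudge with gradient descent, but the learn-from-error loop is recognizably the same.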
But it quickly became apparent that these early AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language — a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.
But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit: the rule-based expert systems that had fueled a boom in the 1980s were failing because they couldn’t handle novel information.
The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven approaches to machine learning that changed how AI acquired knowledge.
Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs like those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.
Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”
But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.
For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm — unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.
Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.
Lessons to Heed
In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.
The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.
A new report from SMPTE provides background to media professionals on how artificial intelligence and machine learning are being used for production, distribution and consumption.
The report explores ethical implications around AI systems and considers the need for datasets to help facilitate research and development of future media-related applications.
“Bias is the model-killer,” SMPTE contends. “Black box algorithms help no one. Intellectual and cultural diversity is critical to high performance. Product teams must broaden their ecosystem view.”
SMPTE has called on the Media & Entertainment industry to be more active and vocal in the debate about developing ethical AI systems. Doing nothing, or not doing enough, is not an option because “failure may come at a high human cost,” the organization says.
“The time to discuss ethical considerations in AI is now, while the field is still nascent, teams are being built, products roadmapped, and decisions finalized. AI development is no longer just a technical issue, it is increasingly becoming a risk factor.”
This call for action forms a substantial part of the “SMPTE Engineering Report: Artificial Intelligence and Media,” which was produced alongside the European Broadcasting Union (EBU) and the Entertainment Technology Center (ETC). The report was the result of a task force on AI standards in media that began in 2020. Since then, it has become clearer to everyone that AI will transform the media industry from pre-production through distribution and consumption.
“I believe that AI will continue to see exponential growth and adoption throughout 2024,” said SMPTE President Renard T. Jenkins. “Therefore, it is imperative that we examine the overall impact that this technology can have in our industry. That is why the progressive thought leadership presented in this document is so important for us all.”
The report begins with a technical understanding of AI and machine learning, followed by the impact these technologies will likely have on the media landscape. The report then moves on to examine AI ethics and ends by discussing the role that standards can play in the technology’s future.
The report describes today’s AI as “disruptive, vague, complex and experimental” — all at once. “It is difficult to understand, and easy to load up with fears and fantasies,” the report reads.
“This is a dangerous combination. The convergence of corporate hype, fledgling methods, biased datasets, and the urgency to productize, are all fertile grounds for failure,” it continues.
“Learning through failure is generally a good way of testing and improving tentative tech like AI — except when models are put in a position to make decisions about policing, hiring, synthetic conversations, or even content recommendation and personalization.
“Then, failure may come at a high human cost.”
Organizations must examine the downside risk of deploying underperforming and unethical AI systems, especially because, in most cases, ethical and technical requirements are the same.
“For example, unseen bias is as bad for model performance as it is discriminatory. Model transparency is not just an ethical consideration: it is a trust-building instrument.”
SMPTE urges the M&E industry to bring its own voice “and nearly 150 years of success marrying human and technological genius” to the debate.
“Media holds a substantial and powerful place in our society as the mass distributor of human narratives and social norms. Media must bring this unique voice and hybrid human/machine culture to AI development and the debate on AI ethics.”
The report explains how Media & Entertainment companies collect and process large amounts of consumer data and that increasingly, this means they must comply with a growing list of legal regimes and data governance requirements. Similarly, there’s a substantial opportunity to use computer vision in virtual production and post-production processes.
SMPTE suggests that the quality and diversity of training sets — “how color correction can affect representation of minorities” — and the use of deepfake technology are “critical areas” where ethical considerations are paramount.
The media industry’s history of sophisticated legal practice around likeness rights, royalties, residuals, and participations is a “substantial advantage in navigating issues related to computational derivatives of image and content,” it writes.
The paper argues for a standards-based approach to verification and identification, and not only of the image (e.g., format and technical metadata), but also of the talent itself and the authenticity of content.
“Persistent, interoperable, and unique identifiers have aided media supply-chains in the past, and could well help with the labeling and automating the provenance of authentic talent in the future age of AI in M&E,” it states. Such work is ongoing, including at the Coalition for Content Provenance and Authenticity (C2PA).
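As a rough illustration of what a persistent, content-derived identifier might look like, the sketch below computes one by hashing an asset's bytes and carrying it in a small manifest. This is a generic, hypothetical example, not the C2PA specification; the function and field names are my own.

```python
import hashlib
import json

def make_manifest(asset_bytes, title, creator):
    """Build a tiny provenance manifest keyed by a content hash.

    The identifier is derived from the asset's bytes, so any alteration
    to the content produces a different ID — a crude form of the
    tamper-evidence that provenance standards aim for.
    """
    digest = hashlib.sha256(asset_bytes).hexdigest()
    return json.dumps({
        "id": f"urn:sha256:{digest}",  # persistent, interoperable identifier
        "title": title,
        "creator": creator,
    }, sort_keys=True)

manifest = make_manifest(b"frame data...", "Test Clip", "Example Studio")
```

Real provenance schemes such as C2PA add cryptographic signatures and edit histories on top of this basic idea, so the manifest itself can be verified, not merely read.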
“At a minimum, requirements for data and model transparency would go a long way towards reinforcing trust in computational methods and help convert those in the industry still reluctant to use statistical learning to optimize human processes.”
Around the corner, the development of conversational agents (chatbots) creates serious ethical risks, especially as the industry looks to create highly immersive and personalized experiences in the metaverse.
There’s a call for ethical considerations to be embedded in all aspects of digital product design and development. This seeding of ethics at the product level is essential to viewing bias as a complex ecosystem of inputs, features, models, outputs and outcomes, it says.
“Any organization’s output, products, and decisions (deliberate or not) inherently fit its culture and values. This is why AI ethics is high-stakes: it deploys an organization’s culture and values on a large scale,” the report argues.
“Because they shape society at scale and have a history of taking the public interest seriously, media companies have a distinct responsibility to move forward with their AI ambitions, with full awareness of these applications’ ethical considerations. They should ensure that all aspects of their development (including data collection), deployment, and end-uses, support the law as well as their own values regarding privacy, justice, tolerance, and human rights.”
The AI Ethics Pipeline
The entire value chain of AI development, from product design to data collection to model deployment, should be secure, transparent, explainable and auditable, says SMPTE.
In contrast, black box machine learning frameworks are “ethically and statistically dicey. They foster sloppiness in data science teams and mistrust for those already suspicious of machine models. What cannot be explained should not be deployed in a decision-making environment.”
The report continues: “In a world where organizations are often too suspicious or too enthusiastic, only secure, transparent, explainable, and auditable machine models can scale resiliently. Additionally, all stakeholders deserve transparency, each in their own language, across different points of view and technical sophistication.”
Ethics, it says, should be part of Quality Assurance for any and all computational systems.
“AI is still a technical ungoverned frontier. Everything around it, from roadmapping to modeling to seeding in company culture, is complex and challenging. Mistakes will happen. Organizations must communicate comprehensively and with humility about their journey to approach and implement processes around ethical AI, for the benefit of all.”
With technical standardization of AI still in its infancy, the Media industry has an imperative to provide language and frameworks to support its development, SMPTE urges.
“AI is an emerging technology, and AI ethics is an almost entirely blank slate. Examples of successful, organization-wide implementation of ML transparency and trustworthiness are extremely rare.”
But this should be motivation to try harder, SMPTE says. “Transparency is not just key: it is a perennial concern.”
The report warns of “model drift,” in which, as the world changes, the problem changes, the data changes, and model performance degrades. “There is no longer a fit between the model and the system, or behavior that it is representing. Only transparent and auditable models can catch model drift before it causes damage.”
March 10, 2024
Deloitte: How Brands Can Better Connect With Creators
TL;DR
The creator economy represents a $250 billion revenue opportunity today, but brands need to adopt a nuanced approach to engaging creators in their cause.
Understanding a creator’s long-term goals, and helping them reach those goals, is not just beneficial for the creator, but also can drive future value for the brand.
A new Deloitte survey explains what differentiates content creators from influencers and what drives consumer trust in creator-brand endorsements.
The creator economy could approach half-a-trillion dollars by 2027, according to Goldman Sachs, but many brands could fall short of tapping into its full potential.
Deloitte, in its “Creator Economy in 3D” report, says that’s because brands are approaching the creator economy with influencer marketing strategies rather than ones tailored to creators. Deloitte’s study surveyed more than 2,000 consumers, more than 500 creators, and 500 brands.
“While influencers are adept at providing brand exposure across a wide assortment of audiences — content creators often help brands penetrate deeper into niche communities and bridge a more personal connection with their audience,” it states.
The management consultancy outlines how brands use “creator” and “influencer” interchangeably — and with reason: Both produce content, act primarily on social media channels, and seek to monetize their work.
“But content creators tend to be better suited for specific objectives, have a deeper depth of audience penetration, and appeal to consumers on a different level than influencers.”
The report outlines these differences in detail: For instance, content creators are said to drive deeper relevance within niche communities and sub-communities whereas influencers tend to drive broader relevance across a diverse range of audiences.
To get the most out of their creator partnerships, Deloitte advises brand managers to understand the underlying drivers behind the consumer-creator bond and how this translates to brand trust.
For instance, its survey found that certain creators have more influence than others. The average consumer has five favored content creators — which Deloitte defines as creators that consumers actively seek out for new updates and content, as opposed to creators that consumers engage with passively in their feed.
“These creators are the social media equivalent of a favorite TV show, perhaps with less regular schedules. This trend also appears to be increasing with each generation, with the median Gen Z social media user surveyed having 10 of these favored creators.”
Recognizing, supporting, and facilitating the creator is also a key element in strengthening a creator-brand partnership. Brands are encouraged to think of it “somewhat like a business-to-business partnership.” In other words, brands succeed when their creator partners succeed.
Most brands surveyed provide creators with assistance on the creative process, content development, networking, and brand management. Some offer financial and wealth management tools and services to their creator partners.
Paying creators “competitive, fair, and equitable market value,” and paying them on time, are also emphasized.
“Understanding a creator’s long-term goals, and helping them reach those goals, is not just beneficial for the creator, but also can drive future value for the brand.”
Deloitte concludes: The creator economy operates effectively in how it reflects and shapes culture, which fundamentally runs on networks of shared values, experiences, and interests among groups of people. Knowing this, successful brands acknowledge the need to build networks of multiple creators to maximize their reach and impact across the various niches within a particular audience.