From Cloud-First to Virtual Production: Amazon Studios’ Next-Generation Approach
Jennifer Wolfe
Content Partner
NAB Amplify
Watch the full NAB Show 2023 session “Amazon Studios: Building a Next Generation Studio” above.
TL;DR
As part of its Intelligent Content Experiential Zone, the 2023 NAB Show assembled a panel of Amazon Studios execs to share their insights into constructing the next-generation studio.
In a session moderated by Jessica Fernandez, head of tech & security communications at Amazon Studios, panelists included head of technology workflow strategy Christina Aguilera, worldwide head of visual effects Chris Del Conte, and head of product strategy Eric Iverson.
From the adoption of a cloud-first approach to the extensive use of in-camera VFX, the panelists highlighted how Amazon Studios is redefining the entertainment studio model.
Espionage thriller “All the Old Knives” and the Amazon Prime series “Solos” and “The Lord of the Rings: The Rings of Power” served as key examples of Amazon’s next-generation studio approach.
Imagine having sunset for nine hours a day on a film set, or creating a bustling crowd scene in the midst of a pandemic. These aren’t scenes from a sci-fi movie, but real-life examples of how Amazon Studios is revolutionizing the production ecosystem.
As part of its Intelligent Content Experiential Zone, the 2023 NAB Show assembled a panel of key technology leaders from Amazon Studios to share their insights into the groundbreaking strategies and technologies they’re leveraging to build the next-generation production studio. The panel discussion, “Amazon Studios: Building a Next Generation Studio,” was moderated by Jessica Fernandez, head of technology & security communications at Amazon Studios, and featured head of technology workflow strategy Christina Aguilera, worldwide head of visual effects Chris Del Conte, and head of product strategy Eric Iverson.
L-R: Jessica Fernandez, Christina Aguilera, Chris Del Conte and Eric Iverson.
From the adoption of a cloud-first approach to the extensive use of in-camera VFX, the panelists highlighted how Amazon Studios is redefining the entertainment studio model and leading the way in the new era of film and TV production. The discussion centered around Amazon Studios’ pioneering use of a fully AWS-powered cloud infrastructure, its innovative virtual production facility, Stage 15, and the company’s commitment to sustainability and diversity, equity and inclusion. Watch the full NAB Show session in the video at the top.
Espionage thriller All the Old Knives, starring Chris Pine and Thandiwe Newton, served as a prime example of Amazon’s next-generation studio approach. The lead characters in the film frequently meet up in an oceanfront restaurant at different times of day. The production team initially considered shooting on location, but since they were filming in London, capturing authentic sunset views was challenging.
To solve this problem, they turned to virtual production. They shot plates of a sunset and played them back on an LED wall placed outside the window of the set on a stage. This allowed them to control the lighting and weather conditions, effectively giving them “sunset for nine hours a day,” said Del Conte. This approach also offered significant efficiencies and sustainability benefits, as they didn’t have to fly the crew out to a beach location or chase the sun to capture the perfect shot.
The use of virtual production was so successful that Variety, in its review of the film, complimented the beautiful sunset scenes, not realizing that they were digitally created, Del Conte recounted. “Variety got fooled,” he said. “So, at the end of the day, [virtual production is] the right kind of tool to be using for these kind of conditions. You don’t have magic hour, you have magic day.”
The Amazon Studios virtual production volume. Cr: Amazon Studios
Aguilera highlighted the production of the Amazon Prime series The Lord of the Rings: The Rings of Power as her favorite example of the studio’s next-gen approach. She emphasized the studio’s proactive adoption of a cloud-first strategy, which ensured a seamless data flow from camera to final creative output. This strategy proved invaluable when the COVID-19 pandemic struck, allowing the production to continue unabated while much of the world came to a standstill. The experience underscored the critical importance of a cloud-first approach and system interoperability in today’s production ecosystem.
“The fact that we had the cloud-first approach, the interoperability between the systems, the data flow straight from the camera, all the way through final creative, and all of these concepts, you know, they took work,” she said. “But when COVID hit, [the production team] didn’t skip a beat. They didn’t have to stop. The rest of the world stopped. They kept going. So that was pretty amazing, the fact that we were able to be proactive and be in a position to keep moving forward.”
Amazon Prime anthology series Solos was another example of the studio’s innovative approach, Del Conte said, describing an episode featuring Helen Mirren. “The entire scene was her inside the space pod the entire time: all-white interior, bubble windows. She had a red reflective leather space suit on and she has whitish hair.”
The initial plan was to use green screen outside of the windows, but this approach would have resulted in green screen spill, changing the color of the pod interior and Mirren’s hair. The bubble windows themselves also presented challenges and, recognizing these issues, Del Conte proposed virtual production as the solution.
He recalled a gratifying moment towards the end of the shoot when a member of the post-production team thanked him for his suggestion, saying, “Not only did we save time and money, but we’re also able to start testing this episode in two weeks,” as opposed to going through iterative processes of VFX shots and management.
Del Conte emphasized that this approach was not only more efficient and cost-effective but also resulted in in-camera final effects shots ready for testing. “This was really the only way to do this kind of shoot, and [resulted in] a better creative experience.”
If you want to make money as a content creator you’ll need to think and work like a business. Not only does it take five months to earn your first dollar and just over a year to begin working full time as a creator, but you’ll also most likely need upwards of $10,000 in the bank to support yourself before the dimes roll in.
“It’s a business, not a freelance gig, and it requires a business approach to revenue generation, management, operations, etc. — even at a small scale of one — to be successful.”
That’s according to new research into the creator economy from The Tilt. Specifically, its research asked creators themselves what it actually took to do their jobs.
“A content enterprise is not a get-rich-quick scheme… it’s not even an ever-get-rich scheme for most,” the report’s authors say.
Even if creators go full time on their content business there’s no bonanza in earnings. The average full-time creator earned $86,000 in 2022. On average, full-time creators expect to bring in approximately $108,199 in revenue in 2023 and will pay themselves $62,224 — roughly 58% of revenue.
Most creators (86% in this report) say they think of themselves as entrepreneurs, and non-creative tasks take up nearly half of their time.
Cr: The Tilt
Content entrepreneurs spend a little less than half of their week on creative efforts. The rest of the time, they’re knee-deep in operations, marketing and sales, content distribution, and other unglamorous tasks.
As one creator reports, “I spend most of my time managing people, doing accounting, talking to sponsors, managing editorial calendars, fixing equipment, etc. People think being a creative is all puppies and rainbows — and no one wants to hear you complain.”
The biggest challenge, cited by 64% of respondents, was growing their online audience.
The Tilt has some advice: The more niche the audience and the narrower the topic, the better the odds for success. Build relationships with your audience by responding to comments, asking for feedback, and creating more of the types of content they respond to, it suggests.
Focus on building connections with your audience and other creators. Someone following you is the start of a relationship, not the end result.
“It’s not actually about how good your content is, it’s about how you leverage it and monetize it. And that mostly comes down to marketing and publicity. The so-called ‘best’ creators are often those who are just best at doing their own marketing.”
The 2023 NAB Show session, “The Independent Age in the Creator Economy,” offered expert insights into brand partnerships and more.
May 7, 2023
The AI Solutions Introduced at NAB Show
Adobe Firefly in action
Artificial intelligence generated a lot of excitement this spring. Its influence spread from Silicon Valley to Hollywood, with a confluence of technologies introduced in Las Vegas at this year’s NAB Show.
Generative AI, in particular, caught the attention of content creators and consumers, manufacturers and media companies. Its promises range from improved efficiency to truer independence for creatives. In Q2 2023, we’re seeing workflows and products shaped by the potential of neural networks, while creatives are starting to take advantage of AI tools.
However, that excitement has come paired with several flashing ⚠️ caution ⚠️ signs. The ethics of AI (and we’re not just talking about deepfakes), concerns about inadequate regulatory frameworks, and ever-present worries about how machines may change the job market persist.
But one expert, ETC’s Yves Bergquist, tells us to consider the source when listening to these worries: “Are they accountable to their audience, or to their code?”
That is to say, weigh what those who are actually working on the algorithms, data sets and workflows think against context from those who might benefit from controversy and hyperbole.
Another generative AI proponent and creative practitioner, Pinar Seyhan Demirdag, thinks these tools will ultimately find their footing in use cases determined by the four Ds: tasks that are dull, dirty, dangerous, or dear for humans to perform. (Rotoscoping comes to mind as an example of dull.)
Coming soon: Demirdag joined Bergquist at NAB Show to discuss generative AI workflows, as well as their thoughts on how this tech may shape the industry’s near-term future. Keep an eye out; that recording will be available on demand.
Read on: Adrian Pennington also explores a lot of AI myths in his coverage for NAB Amplify. His breakdown of What Ifs vs. What Ares for generative AI is especially clarifying (as of April 2023, anyway).
You’re most likely familiar with OpenAI’s ChatGPT and may even be a paying subscriber for GPT-4. DALL-E and Midjourney may have been in your arsenal since last summer. Many other companies have long integrated forms of AI and machine learning into their offerings, but here are a few newer generative AI tools targeted at M&E creatives.
Adobe: Firefly
Adobe announced its AI art generator ahead of this year’s NAB Show. The beta version of Firefly is focused on text and images, but the company expects to integrate its features across its video, audio, animation and motion graphics design apps in the near future.
Colourlab.ai
Colourlab promises film-quality color grading without the drudgery. It pitches this AI tool as an assistant that frees up creatives to spend more time unleashing their imaginations and returning to kid-like experimentation. There’s also a pitch for simplified remote collaboration and reduced file storage for versioning (two other trends that we know aren’t going anywhere for M&E).
DeepBrain AI
Are you interested in beginning a “virtual human journey”? That’s how DeepBrain AI describes its tools for creating 2D and 3D avatars and related text-to-video solutions.
Flawless
This startup’s tagline is “Hollywood 2.0,” and it promises “magical new tools and emerging technologies” tailored for filmmaking. What does it currently offer in its generative AI toolbox?
NVIDIA & Getty Images collab: NVIDIA Picasso
NVIDIA and Getty Images announced a collaboration in late March, in which NVIDIA Picasso’s generative AI model will be trained on Getty’s image library of fully licensed images (and Getty contributors will be credited and paid accordingly). You can learn more via NVIDIA CEO Jensen Huang’s GTC keynote here.
Runway
This New York-based startup offers more than 30 of its so-called “AI Magic Tools” (are you sensing a theme in how we think about AI?) that aid creatives in generating and editing images, audio, and video content.
You may have heard about Runway during the 2022 Awards Season; some of the VFX work for “Everything Everywhere All at Once” was crafted using Runway’s editing suite (specifically the rotoscoping work for that rock scene, IYKYK).
Seyhan Lee: Cuebric
This browser-based app promises the ability to move “from concept to camera in minutes” using the Stable Diffusion image-generation model. Forbes’ Tom Davenport shares a good overview of Cuebric.
Synthesia
Synthesia Studio is an AI video generation platform with a lens on the corporate video space. Its home page highlights the ability to transform PowerPoint presentations into training videos with avatars, as well as use cases for sales, onboarding and more. There’s less emphasis on creativity and more focus on efficiency and budgets.
This Week: Ad Pullback continues as YouTube revenue declines. New Gen AI breakthrough adds recursion and autonomy. A study debunks the middle class in the creator economy – but I see it differently (it’s all about the school of hard-knocks). Plus, updates on LinkedIn, TikTok, Wattpad, Bored Apes, and more. It’s the first week of May 2023 and here’s what you need to know NOW!
Ad Pullback Continues: Although Google had a good quarter, YouTube revenue continued to decline. Ad revenue was down 2.6% from last year’s second quarter, albeit a smaller YOY decline than in the first quarter. It’s both structural and cyclical. Cyclical, because ad revenue has been broadly down (although Meta was up 3% YoY, with 24% higher Reels views, which is a bright spot). But also structural, as more viewers move to Shorts, which is likely cannibalizing longer-form views and revenue. Getting Shorts monetization right is critical, but so is blunting TikTok’s hold on time spent. I won’t rehash our Shorts updates from last week, but even as views grow, the clock is still ticking on unlocking Shorts revenue.
The latest Gen AI breakthrough: You need to know about Auto-GPT, which adds recursion, autonomy and self-prompting to ChatGPT. It’s well worth checking out. I could see a multi-step Auto-GPT process that evaluates and updates video clips for a daily news summary video or iterates headlines and thumbnails at light speed. Here are 5 early Auto-GPT efforts to help you brainstorm potential creator economy applications.
New Study Debunks the Middle Class: Seems like everyone wants to release a study about the Creator Economy. This week’s newest entry is from Goldman Sachs, predicting that the global creator economy will grow to $500 billion by 2027. Goldman, alas, doesn’t see the emergence of a middle class, predicting that only the top 4% will make over $100,000 a year even as the pie grows. Simon Owens disputes this report – and Citi’s March study – and says a middle class does indeed exist. According to Owens, we need to see creators as startup CEOs and shouldn’t expect profitability for a while. I see it slightly differently – see below.
The Value of a Creator Education: Lots of noise around the unlikely career prospects for creators – particularly in this FT story sent to me by reader Don Anderson last week. I don’t necessarily agree with the article’s thesis, though. Yes, many (most) creators won’t find it a sustainable long-term career. But it’s not a hopeless endeavor. Any creator with over 50,000 followers has (like it or not) become the CEO of a small – or not so small – direct-to-community business. There are many avenues to building sustainability even as your community growth stalls (see Kajabi, Orca, Gumroad and many others).
But even when the rainbows and unicorns fade, there’s value remaining. Becoming a creator isn’t a dead end. Stick at it, find some success (if fleeting), and it’s equivalent to a graduate degree in social video. Those skills you learned telling stories, optimizing thumbnails and headlines, planning and editing videos, scouring analytics, developing a community and building a support team are all eminently transferable to the corporate world.
That TikTok MBA, YouTube MS or PhD in LinkedIn is your ticket to a lucrative career telling corporate stories on digital video platforms. Smart companies are already realizing they need a chief TikTok officer, YouTube Director or social video strategy expert.
And if you run a business, perhaps your next hire should come from the ranks of the creator world. They might be a bit raw after working on their own, but with mentorship and support they just might be the best hire you make in 2023.
Being a creator isn’t a dead end. It’s a hard-knock entrée into even bigger success — both for you and the lucky company that hires you.
Thanks for reading and see you around the internet. Send me a note with your feedback, or post in the comments! Feel free to share this with anyone you think might be interested, and if someone forwarded this to you, you can sign up and subscribe on LinkedIn for free here!
FAST has evolved at speed in the US, proving a popular and profitable way for rights holders to incrementally increase revenue from existing library content. It is from this base that FAST is expected to develop into a $10 billion-plus industry, with few signs that growth will be stunted any time soon.
For channel owners, discoverability has become the key focus and it is apparent both in the US but also in less mature markets, such as the UK. Whether in Europe or the US, cutting through on the EPG means providing a curated experience for the viewer with content that pops and demands attention.
Markets outside the US are expected to create a $2 billion revenue opportunity. Indeed, it is in countries outside of the US that growth is strongest, with FAST revenue surging by almost 50 times between 2019 and 2022.
With more than 1,500 FAST channels in the rapidly maturing US market, the challenge for service providers is one of discoverability.
The US will remain dominant in absolute revenue terms, but the fastest growth will come from countries outside of the US, as FAST’s international momentum gathers pace.
These are the key findings from a new report, “Move FAST or Get Left Behind,” into the spread of the Free Ad-Supported TV ecosystem by Television Business International.
There is now an array of FAST services in the US, most offering between 200 and 350 channels: LG Channels, Roku and Paramount-owned Pluto TV all hover around the top end of that spectrum, while numerous niche audiences are catered to by services such as the current affairs-focused Haystack News, The Weather Group’s Local Now and Sinclair Broadcasting-backed STIRR.
“Clearly, with so many channels and such an array of topics now available, viewers are faced with vast choice, while FAST channel owners are facing considerable challenges around discoverability,” says TBI Vision editor Richard Middleton, one of the report’s authors.
“Innovation is needed for discoverability of channels and providers need to try to work together with platforms on that,” says Bea Hegedus, global head of distribution at Vice, which operates FAST channels on Tubi and Samsung TV Plus. Hegedus, quoted in the report, says that FAST channel owners “that lack clear branding will need heavy investment to find and retain an audience.”
For the major players, TBI finds that the strategy of quantity is now shifting to quality. With FAST advertising revenue in the US expected to hit $7.3 billion in 2024, FAST services are looking to provide more premium fare and brand cut-through: WBD’s deal with Tubi, for example, will see the service launch 14 WB-branded FAST channels, as well as three curated FAST channels crossing reality, series and family.
“As platforms compete for viewers they will try and differentiate themselves from the competition by looking for exclusive content or an exclusive launch window for new content,” Bob McCourt, COO at Fremantle International, tells the report authors. “We are seeing this trend already, as some of the major studios are windowing more premium content into the FAST space, which is legitimizing its growing adoption as a free, cable replacement.”
Cr: TBI Vision
Models are also shifting, depending on the company and how it is approaching FAST. Per the report, Paramount increasingly sees Pluto TV, which it acquired in 2019 for $340 million, as a way to funnel viewers to its other streaming products, while the service itself carries numerous channels with Paramount content. However, most channels are striking multiple deals with FAST services to ensure “carriage.”
Models differ, but the US industry has tended towards an approach that sees channel owners selling a proportion of the ad inventory. Revenue share is also popular (often now around 50/50 between FAST service and the channel), while some rights holders may receive a fee for licensing a channel.
Aggregators are also buying up programming rights and curating their own channels, which are then put back into the FAST channel ecosystem, with each party receiving a fee.
McCourt adds, however, that a “critical mass” of channels is being reached. He also expects “increased allocation of advertising dollars by agencies to FAST,” and “more innovations in advertising, as platforms adopt brand integrations as well as traditional adverts.”
Shaun Keeble, VP of digital at Banijay Rights, which operates more than 20 FAST channels globally, is quoted by the report as saying the need for exclusivity of channels will heighten across all platforms.
“I expect there will be more personalization of EPG offerings and more sophisticated tailoring of content down the line, too,” he says. Keeble also believes that first and second window runs, and even original series, “will increase in number as commercial models adapt and the need for viewer retention becomes more vital than ever.”
The Rest of the World
Europe has lagged in FAST uptake, but the format is now growing in popularity there, with advertising revenues expected to hit $500 million this year and more than twice that by 2027. There are limiting factors in some markets, such as the UK, where free-to-air (FTA) TV is more common, but the opportunity to more closely target specific viewers means ad growth is widely expected.
Much of that growth will come from a select few markets: UK revenue is expected to quadruple over the next four years to hit $506 million by 2027, and Germany is also expected to deliver, with revenues exceeding $200 million within five years.
Marion Ranchet, founder of The Local Act, tells TBI, “It’s not easy to copy and paste the US winning formula. Each region is different when it comes to CTV penetration, advertising maturity and the like, all of which are key ingredients to FAST.
“In Europe, one major difference is the fact that free as a value proposition is nothing new. We have FTA broadcasters bringing us amazing content. Therefore, the immediate appeal of FAST in the US won’t win hearts as quickly here.”
And while English-language countries have dominated the FAST landscape to date, this could change: for example, Omdia has found that Tubi is particularly popular with US Hispanic audiences largely because it carries considerable amounts of Spanish-language content.
Executives in charge of four of the leading FAST services convened at NAB 2023 to give a full overview of the sector and where it is headed.
May 10, 2023
April 18, 2023
Despite Challenges, NAB Chief LeGeyt Says Next 100 Years Look Positive
NAB President and CEO Curtis LeGeyt (left) and KMEX Los Angeles anchor Gabriela Teissier at NAB Show’s welcome session on Monday.
In welcome address, CEO says broadcast TV, radio have a street-level edge in media ‘arms race’
By Michael Malone, Broadcasting + Cable
Watch the full NAB Show welcome session above.
NAB President and CEO Curtis LeGeyt sat for questions from Gabriela Teissier, anchor at Univision’s KMEX Los Angeles, during NAB Show’s welcome session. LeGeyt said the show’s 100th anniversary allows NAB to reflect on the past and sort out how to best help members in the future.
“The thread throughout all of it is innovation,” he said. “Innovation in service to our local communities.”
LeGeyt spoke of the “arms race” in terms of the modern media age, as the likes of Google, Amazon Prime Video, Spotify and other tech giants fight traditional media for users and revenue dollars. LeGeyt stressed that broadcast is unique in that it covers news at the street level. “We have the boots on the ground in local communities,” he said, “and that’s something no one else is doing.”
The pandemic was a reminder, LeGeyt said, of local broadcasters’ role in the communities they serve. Each subsequent severe weather event or other major local story is another reminder. “It’s very, very clear where broadcasters thrive — free, local, live,” he said.
ATSC 3.0 enhances local broadcast’s role in the community, LeGeyt added, noting that the standard, also known as NextGen TV, is now on the air in 60 markets.
Teissier asked about radio’s efforts to maintain its presence in automobiles. LeGeyt cited the “fierce competition for real estate on the dashboard” involving the newer media players. Making radio’s battle more difficult, he said, is that the industry has a vast array of owners, while the likes of Sirius XM and Spotify have but one.
“If we’re not all rowing in the same direction as an industry,” LeGeyt said, “we’re going to lose this arms race.”
Cable TV, podcasts and other digital media often offer users a forum among those who share their take on the world, LeGeyt said. Local broadcast, for its part, brings people with “different world views, different persuasions, different interests” together with unbiased content. Reporters in a given community know the best way to communicate with consumers in that region.
LeGeyt stressed that AM radio remains “very relevant,” reaching rural communities and serving as the backbone of the Emergency Alert System.
Teissier asked about Spanish-language consumers, and LeGeyt said that demographic is the most reliant on local broadcast. “They are far and away the most susceptible to disinformation on social media,” he said.
Local broadcast is “filling an enormous void left by the newspaper industry,” said LeGeyt, and has the support of the lawmakers he encounters in Washington.
Moments before FCC Chairwoman Jessica Rosenworcel took the stage, LeGeyt pressed for “modernized” ownership regulations that better reflect the state of media in 2023.
Asked about NAB weighing in on the FCC’s handling of the proposed Standard General-Tegna merger, LeGeyt said NAB typically stays out of mergers and acquisitions. But he felt the “uncertainty” in the FCC’s regulatory process may get in the way of further investment in broadcasting. “Do some fundamental changes need to be made to that process?” he wondered.
Teissier also asked about AI, and LeGeyt said such technology can both help and potentially hurt broadcasters. AI can help stations better understand what’s going on in their communities and can free up reporters to be out in the field more. He is also concerned about whether content creators will be fairly compensated and about the potential for AI-rooted mistakes in reporting.
“There are a lot of opportunities but I want to wave the caution flag,” he said.
The future looks bright for broadcast, the NAB chief said, but it will take hard work. “I couldn’t be more excited to lead that fight,” LeGeyt concluded.
The Library of American Broadcasting Foundation presented its second annual Insight Award to CBS newsmagazine “60 Minutes” during Monday’s welcome session (from left): Session co-chair Heidi Raphael of the LABF; “60 Minutes” Executive Producer Bill Owens; session co-chair Jack N. Goodman and NAB President and CEO Curtis LeGeyt.
His words seemed to resonate with the audience. “I think that he’s got a great grasp of the issues, he communicates his point of view and NAB’s point of view, and he talks about broadcasters in the most favorable light — that we’re here to serve our local communities,” Phil Lombardo, CEO of Citadel Communications, said. “He makes that point time and time again and that’s what you want from the CEO of the National Association of Broadcasters.”
New Normal, Work/Life Balance: Remote Production Is Here to Stay
The benefits to news and sports include reduced equipment and personnel requirements
By Peter Suciu, TV Technology
It was three years ago that the industry pivoted to accommodate COVID-19. The world has returned to normal for the most part, but broadcast production remains changed forever. The pandemic forced the industry to adapt and overcome, and in the process, it discovered that having teams work remotely had some advantages.
“Remote production is here to stay,” said Sasha Zivanovic, CEO of Nextologies, which operates a broadcast video delivery network specializing in broadcast-grade video connectivity. “The pandemic showed that you could practically get away with consumer equipment from home to facilitate what is normally produced in a studio or broadcast center.”
The sentiment was shared by Nick Ma, CEO and CTO of Magewell, which is showing its Ultra Encode AIO advanced live media encoder at the show.
“The benefits of remote production for news and sports — such as reduced equipment and personnel requirements at the event site, which in turn lowers production costs — are compelling,” said Ma. “It also allows valuable resources such as experienced staff to be maximized. For example, with remote production, a producer or director may be able to work on multiple events taking place in different cities on the same day, all from a centralized production center.”
However, bandwidth will continue to be an important consideration when sending multiple remote feeds concurrently from distant locations, and the reliability of the IP connection is critical.
“The ongoing rollout of 5G technology promises to bolster both of these aspects in wireless production environments, delivering lower latency and higher bandwidth that can be combined with cellular bonding to provide network redundancy across multiple carriers,” added Ma.
LESS TRAVEL, SMALLER TEAMS
Remote production has continued to result in smaller teams heading out on the road, even for major events — while the bulk of the staff can also be available to work from a broadcast center or even from home — reducing costs in the process. This will continue to present opportunities, as well as some challenges.
“We are traveling a lot more than just six months ago, but even before the pandemic our customers were seeing the benefits of remote production,” said David Edwards, product manager at Vislink. The company is demonstrating its 5G-Link, a cellular video and data communication device that enables bidirectional data communication between free-roaming wireless cameras and production centers.
“Remote production allows teams to be more efficient and provide for that work/life balance, and this makes it easier to recruit better as not everyone is ready to constantly be on the road all the time,” said Edwards. “We also see that cloud production is enabling teams to work together on multiple events without the need to travel to each one.”
In other cases, smaller — possibly even single-person — teams can do what used to require a truckload of gear.
“Thanks to efforts in miniaturization, most everything that is in a grip truck can practically fit in a backpack,” said Douglas Spotted Eagle, UAS instructor and director of educational programming at Sundance Media Group. “The heaviest kit is just 65 pounds, which can be important to note if you’re traveling by airplane.”
There are still times when not all aspects of a production can be handled remotely.
“A lot of what our team discusses in pre-production is how to get the most authentic content. Much of my directing work is interview-driven docu-style, and I can tell you that, as many remote interviews as I have successfully conducted during and since the pandemic, there is nothing like speaking face-to-face,” said Amy DeLouise, FMC programming director and founder of #GalsNGear.
“During the pandemic, we pivoted to producing entirely virtual events, which meant bringing a lot of our clients up to speed on effectively what it means to produce hours of broadcast television. You can’t just have talking heads for a day-long conference.”
IP-enabled workflows for remote video production will allow broadcasters to cut costs and innovate new ways to create and consume content.
April 25, 2023
April 17, 2023
Immersive Technology Lands in the Spotlight Via the Metaverse
Growth to take off as virtual interfaces transition from tech toys into tech tools
By Susan Ashworth, TV Technology
If you needed a specific definition for what the metaverse can do, you may be waiting awhile.
It’s an immersive embodiment of the internet. And it’s a shared, personalized experience. But it’s also an animated, interactive playroom, one that gives us the chance to experience our existence in ways we can’t in the physical world.
For broadcasters, game makers and content creators, the metaverse has the capacity to transform how the industry produces content.
By using affordable virtual reality technology, a media company might have the capacity to view, manage and interact with an ongoing production regardless of where it is happening. Or engage linear television viewers with an immersive, companion experience.
What’s clear is that the metaverse is poised to be something big.
“In general, it feels like the industry is on the doorstep of taking major strides toward delivering on truly immersive entertainment,” said Chris Brown, NAB executive vice president and managing director of Global Connections and Events.
GROWING MARKET
In its Tech Trends 2023 report, Deloitte Insights found that the metaverse is expected to be an $80 billion market by 2024 as companies begin to use the technology to create an enriched alternative to the flat, two-dimensional world we currently access via video feeds, email and texts.
“In other words, the metaverse is best thought of as a more immersive incarnation of the internet itself,” the authors of the Deloitte report wrote. “[It is an] ‘internet plus’ as opposed to ‘reality minus.’”
Growth is expected to take off as virtual interfaces transition from technology toys into technology tools, with new business models following closely behind.
In a recent panel discussion about the metaverse, Deloitte Consulting Principal Jessica Kosmowski said that industries are just now on the cusp of exploring unique initial use cases for the metaverse.
“We are essentially looking at the next evolution of the internet,” Kosmowski said. “Every aspect of the tech, media and telecom ecosystem is in for a major change in the next few years. Media companies will need to develop new business models [and] engage consumers with new content and experiences. Products and services will be reimagined at every layer of the technology stack.”
NAB Show is tackling the issue with a series of sessions, roundtable discussions and exhibitor displays. Leaders from Microsoft and Dentsu will take an in-depth dive into the metaverse and explore how companies have already begun to create destinations and experiences during the Tuesday session “Secrets of Building Your Brand in the Metaverse.”
Another Tuesday session, “East vs. West: How Will the Metaverse Evolve and Converge Globally,” will explore the commonalities and obstacles that exist between the Western entrepreneurial model and the Eastern centralized model and what businesses can expect when it comes to building within this new interconnected universe.
CREATIVE USES
What are the possibilities of all this? Consider scented packs that could be connected to a virtual reality headset to mirror the lush, scent-filled environment a user is watching on screen. Or a hyperreal augmented reality shopping experience led by an AI-powered avatar. Or the use of sensitive, interactive haptic gloves that would give a user a sense of touch.
There’s already demand for blending physical and virtual worlds in the media industry.
Sinclair Broadcast Group and Deloitte recently announced plans to launch a new metaverse sports fan community driven by a 3D creation tool. Beyond simply viewing a live game, fans can engage before the season and before each game. Sinclair called the partnership a key step in driving new revenue streams and deepening engagement with its viewers by redefining the sports viewing experience.
Almost universally, experts are saying that those interested in what the metaverse has to offer should start with strategy, whether the main goal is to develop new streams of revenue or to improve production operations through an augmented work experience.
On the show floor, exhibitors will spotlight their work in the metaverse and related experiences like Web3, AI and data-driven personalization.
“New immersive content experiences are imminent, from pure AR/VR or mixed reality variations to the full-blown promise of new digital worlds with users as the central character,” said NAB’s Brown.
While there are certainly hurdles ahead — chief among them the challenge of syncing all sides of the ecosystem, from content development, creation and distribution to the consumer technology necessary to deliver the ultimate user experience — the industry looks ready to take strides toward delivering deeply immersive entertainment, Brown said.
On The Main Stage: A Case Study — Color and Finishing in the Cloud
“The Lord of the Rings: The Rings of Power.” Cr: Amazon Prime Video
A Case Study: Color and Finishing in the Cloud Today | 12:45–1:30 p.m.
Jesse Kobayashi, VFX producer on The Lord of the Rings: The Rings of Power, will showcase how Blackmagic Design, Company 3 and AWS collaborated to create an entirely cloud-based infrastructure for conform, color grading and delivery on one of the largest television shows in history.
Jesse Kobayashi
The session will detail this collaboration and explore how learnings and values from the production are leading to new use cases and opportunities for productions across the industry.
Kobayashi is a visual effects producer with more than two decades of experience in the industry. In addition to The Rings of Power for Amazon Studios, his credits in visual effects include Kong: Skull Island and Warcraft for Legendary Pictures and Krampus for Universal Pictures. Kobayashi has also served as director of visual effects at Legendary Pictures and as a post producer at both Warner Bros. and Laser Pacific.
The session is part of Post|Production World (P|PW), produced by Future Media Concepts. Held on the Main Stage, the session is open to all.
NAB Show: Learn How the Cloud Workflow… Worked on ‟The Lord of the Rings: The Rings of Power”
From “The Lord of the Rings: The Rings of Power,” courtesy of Amazon Studios
TL;DR
“A Case Study: Color and Finishing in the Cloud” is scheduled for April 16 at 12:45 p.m. on the Main Stage.
“The Lord of the Rings: The Rings of Power” VFX Producer and modern filmmaking consultant Jesse Kobayashi will share insights from that production.
Blackmagic Design, Company 3 and AWS collaborated to create a customized infrastructure, which Kobayashi will describe.
“The Lord of the Rings: The Rings of Power” VFX Producer Jesse Kobayashi will head to the NAB Show Main Stage to discuss how the production created a cloud-based infrastructure for conform, color grading and delivery.
On April 16 at 12:45 p.m., he’ll deliver “A Case Study: Color and Finishing in the Cloud,” detailing how Blackmagic Design, Company 3 and AWS collaborated on this project. Kobayashi will also share best practices and takeaways from the production’s ways of working.
This keynote is billed as a free “bonus” Post|Production World session, and is open to all show attendees. (P|PW is produced by Future Media Concepts.)
From “The Lord of the Rings: The Rings of Power,” courtesy of Amazon Studios
Kobayashi has two decades of experience as a visual effects producer, with credits for “Kong: Skull Island” and “Warcraft” for Legendary Pictures and “Krampus” for Universal Pictures.
Kobayashi also works as a consultant and advocates for the adoption of new filmmaking technology.
Kobayashi has also served as director of visual effects at Legendary Pictures and as a post producer at both Warner Bros. and Laser Pacific.
He is a graduate of Azusa Pacific University, where he helped found its first film courses.
With nearly 10,000 VFX shots, post-production on the first season of “The Lord of the Rings: The Rings of Power” was enabled by AWS.
April 7, 2023
NAB Show: Generative AI, Bringing Together the “Why” and “How”
TL;DR
Generative AI (think ChatGPT and DALL·E) is poised to change the media and entertainment industry in myriad ways.
Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag will discuss how creatives can use generative AI tools to facilitate their work at a NAB Show Create session on April 17 at 3 p.m.
A NAB Show panel discussion aims to separate the hype from the “how” and “now” of generative AI for M&E.
This panel, featuring AI & Blockchain in Media Project Director Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag, will discuss how generative AI tools can help the media and entertainment industry in 2023, and consider how this technology might disrupt and augment workflows in 2024 and beyond.
Discover where Bard, Whisper, and DALL·E might fit into your creative process, and learn about other AI tools that could soon automate microworkflows at a desk near you.
A NAB Show Conference Pass is required for this session. Register here.
Speakers
Yves Bergquist is a data scientist and the director of the AI & Neuroscience in Media Project at USC’s Entertainment Technology Center, where his team helps the entertainment industry accelerate the deployment of next-generation analytics standards and solutions, including artificial intelligence.
He is also the CEO of AI engineering firm Novamente, which applies neural-symbolic artificial general intelligence to large enterprise problems. Novamente is the AI developer behind Hanson Robotics’ “Sophia.” His team also built the world’s very first fully autonomous AI-driven hedge fund, Aidyia, which is now defunct.
Before Novamente, Bergquist managed business development at analytics firms Bottlenose and Ranker in Los Angeles. He was part of the founding team at Singularity University, a joint venture between Google and NASA.
Pinar Seyhan Demirdag is an AI director, multidisciplinary creator and visionary, an outspoken advocate for the conscious use of technology, and an opinion leader in generative AI.
In 2020, Demirdag and Gary Koepke founded Seyhan Lee, which has become the bridge between generative AI and the entertainment industry. Seyhan Lee created the first generative AI VFX for a feature film (“Descending the Mountain”) and the first brand-sponsored generative AI film (“Connections/Beko”).
In 2022, the company announced Cuebric, a tool that combines several different AIs to streamline the production of 2.5-D environments for virtual production stages.
The panel will be moderated by NAB Amplify Senior Editor Emily M. Reigart.
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
ChatGPT poses a fundamental question about how generative artificial intelligence tools will transform the workforce for all creative media.
April 6, 2023
“The Last of Us” Creative Team Takes the Stage at NAB Show
TL;DR
HBO’s adaptation of the dystopian video game “The Last of Us,” starring Pedro Pascal as Joel and Bella Ramsey as Ellie, is a 2023 fan favorite and has also been hailed by critics for its artful storytelling.
The panel will feature show creator and showrunner Craig Mazin, as well as Timothy Good, ACE, and Emily Mendez; Ksenia Sereda; Alex Wang; and Michael J. Benavente.
“The Last of Us” showrunner and creative team will discuss HBO’s small-screen adaptation of the hit video game on the Main Stage of the 2023 NAB Show.
The Sunday morning panel, presented by American Cinema Editors, will discuss the editing, cinematography, VFX, and sound artistry that brought Ellie and Joel to life.
Executive Producer Craig Mazin will be joined on stage by editors Timothy Good, ACE, and Emily Mendez; cinematographer Ksenia Sereda; VFX supervisor Alex Wang; and sound supervisor Michael J. Benavente.
The conversation will be moderated by The Hollywood Reporter’s Carolyn Giardina.
In addition to his role as executive producer, Mazin is also the multiple Emmy award-winning co-creator, writer and director of “The Last of Us.” Previously, he served as the creator, writer and executive producer of HBO limited series “Chernobyl,” for which he won Golden Globe, BAFTA, Writers Guild, Producers Guild and Peabody awards.
On “The Last of Us,” Timothy Good, ACE, was primarily responsible for editing the first-season finale and the third episode, which features the love story of Bill and Frank. He has edited a wide variety of TV series and miniseries, including ABC’s “When We Rise,” Netflix’s “The Umbrella Academy” and Fox’s “Fringe,” and has also worked on the original “Gossip Girl” on The CW and Fox’s “The O.C.”
Pedro Pascal and Bella Ramsey in “The Last of Us,” courtesy of HBO
While working on “The Last of Us,” Emily Mendez rose from assistant editor to co-editor alongside Good for four episodes. She has also worked on the editorial teams for “The Umbrella Academy,” Fox’s “The Resident,” Hulu’s “Light as a Feather” and Fox’s “Rosewood.”
Sereda was listed as one of the “20 Cinematographers You Should Know at Cannes 2019” for her work on “Beanpole.” Before “The Last of Us,” she worked on films such as “Little Bird,” “Petersburg. A Selfie,” “House on Clauzewert’s Head” and “Acid.”
Wang is a 20-year veteran in the film and television industry. He has worked for VFX studios such as Digital Domain, DNEG and Industrial Light & Magic. Wang became a VFX supervisor for “Deadpool” in 2015 and counts “Jurassic World Dominion” as a recent project.
Benavente names Hulu’s “Under the Banner of Heaven” as one of his most recent projects. He sits on the Sound Branch Executive Committee of the Academy of Motion Picture Arts and Sciences.
Award-winning multihyphenate Brett Goldstein will dive into his creative process in a fireside chat on April 17 at 4:00 p.m.
April 5, 2023
April 4, 2023
Step Into the Ring: Kramer Morgenthau’s Cinematography for “Creed III”
TL;DR
Even though the “Creed” movies are part of an expanded “Rocky” cinematic universe, this is the first of the nine films that doesn’t include the original character in the plot.
Director and star Michael B. Jordan collaborated with “Creed II” cinematographer Kramer Morgenthau to reinvent how boxing scenes are shot.
The filmmakers aimed for a heightened visual style influenced by Japanese anime, including what they called “Adonis vision,” a subjective POV from Adonis Creed as he’s clocking each fight.
“Creed III” was shot in IMAX format with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics.
Like the fighters at the heart of most boxing movies, Creed III had a number of challenges to overcome on its production journey. First, even though the Creed movies are part of an expanded Rocky cinematic universe, this was the first of the nine films without the original character in the plot; Rocky had left the story.
Flipping this negative into the positive feel of a fresh start gave first-time director and returning star Michael B. Jordan a chance to reinvent how to shoot the boxing scenes in particular. An easy reference, subconscious or not, was Scorsese’s Raging Bull, whose fight scenes are stylistically distinct from everything else in that film.
Also, a new POV suited the storyline of a fight between a retired Adonis Creed and a significant figure reappearing from his past with major issues to resolve.
Creed II cinematographer Kramer Morgenthau, returning for the third installment, laid plans with Jordan for a new “in ring” aesthetic, as Max Weinstein explains in American Cinematographer. “Settling into his duties as a director, Jordan determined early in prep that he and Morgenthau would need to take two ‘big artistic swings’ to fully engross audiences in Donnie’s next chapter.”
The intention was to aim for a heightened visual style. “Michael is hugely influenced by Japanese anime — that’s completely his stamp on the movie. So, he brought that into the way we cover the fights,” Morgenthau says. “There’s this thing we call ‘Adonis Vision,’ where you’re seeing subjective point-of-view from Adonis as he’s clocking each fight, and that plays out in an anime style, with these hyper-real close-ups.”
For that, they switched to very wide-angle lenses, a 12mm Panavision H Series and a 14mm VA. “That again was part of Michael’s vision from the beginning. It’s very much an anime approach.”
Michael B. Jordan as Adonis Creed and Jonathan Majors as Damian Anderson in “Creed III.” Cr: Eli Ade/Metro-Goldwyn-Mayer Pictures Inc.
But the action in general had to be seen from the inside, not the outside, which is a problem for most sports action movies.
The Panavision website described how the boxing was shot within reach of the fighters. On both Creed II and III, Morgenthau was joined in his corner by A-camera and Steadicam operator, Michael Heathcote. “Mike and I came in early during prep to work with [2nd-unit director and supervising stunt coordinator] Clayton Barber and [assistant stunt coordinator] Eric Brown to help design the moves for the fight choreography. There’s an arc to what happens in the fights and the stories happening in the corners and in the ringside seats. That was all carefully choreographed, like shooting a piece of dance.”
Working with Panavision Atlanta, Morgenthau chose to shoot Creed III with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics. “We shot all the dramatic scenes with T Series and C Series anamorphic lenses, and for the fights, which are in the 1.90:1 aspect ratio for IMAX, we used [prototype] spherical VA primes that we customized to add a bit more softness and help them match the look of our anamorphic lenses,” the cinematographer explains.
Morgenthau also shot certain sections of Donnie’s bouts with the Phantom Flex4K, whose high-speed capabilities enabled him to create an “ultra-slow-motion analysis of some of the major moments in the fights, where we wanted to be inside the boxers’ heads.”
Small digital cameras were also used in prep to rehearse moves: “We prepped by shooting each fight with small digital cameras, and shooting sketches of what it should be, figuring out the most impactful places to place a camera and trying to show what it’s like to be in the ring from a boxer’s perspective.”
Behind the scenes on “Creed III”: director Michael B. Jordan with cinematographer Kramer Morgenthau, José Benavidez Jr., Jonathan Majors, Mila Kent and Tessa Thompson, plus Jordan as Adonis Creed. Cr: Eli Ade and Ser Baffo/Metro-Goldwyn-Mayer Pictures Inc.
The other big “artistic swing” was the unveiling of a new, taller aspect ratio to give the fighters an almost god-like stature. “In the film’s dramatic scenes, intimate glimpses of Donnie’s and Dame’s out-of-the-ring lives are framed for the 2.39:1 aspect ratio, but whenever a match is underway, the frame is expanded to 1.90:1 Imax. The filmmakers opted to shoot most footage for both aspect ratios with Sony Venice cameras certified by the ‘Filmed for Imax’ program,” Weinstein notes in American Cinematographer.
With up to 26% more picture, this third installment in the Creed franchise became the first sports-based film included in the “Filmed for IMAX” program.
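As a quick back-of-the-envelope check of that figure, assuming the comparison is between the 2.39:1 scope frame and the 1.90:1 IMAX frame at the same image width (which matches the description of the black bars receding while the width stays fixed):

# Rough check of the "up to 26% more picture" claim: both frames share
# the same width, so picture height (and area) scales with 1/aspect.
scope_ratio = 2.39  # standard widescreen frame (width:height)
imax_ratio = 1.90   # expanded "Filmed for IMAX" frame

extra_picture = scope_ratio / imax_ratio - 1
print(f"The taller 1.90:1 frame shows {extra_picture:.1%} more picture")
# -> The taller 1.90:1 frame shows 25.8% more picture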
“It was really exciting to be able to integrate the IMAX cameras into the filmmaking process, especially the way we used them to open the world up and to make it very immersive and visceral for the fight sequences,” says Morgenthau, according to a report by ReelAdvice. “And that’s how we chose to use it; there was just something very magical, especially the scene at Dodger Stadium, where MBJ is walking out onto the field and the image aspect ratio expands in shot and the black bars recede, and you get this really tall, beautiful, powerful image. It just elevates everything; there is just something hyperreal about it. And to be the first sports movie doing that, it was a creative high.”
Director and star Jordan, speaking to American Cinematographer, says “We were looking at these old photos of Muhammad Ali by Neil Leifer, and [we called] the shots that he would get of these outdoor fights ‘clouds to the canvas,’ where you can see everything in the frame. So, we just wanted to recapture that — get all that information up on the screen. Then, we’d ask, ‘Okay, when is it going to open up? When is it going to transition into that ratio?’ It was about picking those moments and balancing them.”
Morgenthau spoke with something close to reverence for the sport and the fighters in an interview with Gary M. Kramer for Salon. “The way we photographed the bodies was like photographing sculpture. Their bodies are sculpted and beautiful, and covered in sweat and oil and very reflective. Shooting them was about how their bodies and faces were reflecting light, and honoring their performances was showing them in their ‘best light,’ so to speak,” he said.
“I studied paintings by George Bellows, and the Ashcan school of painting was an inspiration. There was an Eakins painting in a museum in Philadelphia that I was looking at, and I referenced great boxing photography, like some of the Ali color photographs. These images inspired how we lit the boxers.”
When It’s All an Action Sequence: Editing “John Wick: Chapter 4”
TL;DR
Director Chad Stahelski wanted to work with an editor who came with no preconceptions about how a John Wick action film should be put together.
Editor Nathan Orloff talks about being able to accomplish a fantastic rhythm, but over a near three-hour run time.
Stahelski discusses cinematic influences including “The Good, the Bad and the Ugly” and MGM musicals.
With John Wick 2 and 3 editor Evan Schiff unavailable, franchise director and co-creator Chad Stahelski cast around for a new cutting room collaborator for Chapter 4. He alighted on Nathan Orloff (Ghostbusters: Afterlife), in part because Orloff had limited experience editing action movies.
“In my interview with Chad, we just really hit it off,” Orloff explains on the Next Best Picture podcast. “I found out many months later that one of the reasons he wanted to bring me on is because I don’t have extensive experience in action. He didn’t want someone to come in and do their thing that they’ve been doing on other action movies… because John Wick is sort of antithetical to how a lot of action movies are cut these days.”
To understand why, you have to appreciate that Stahelski’s vision for the fourth installment in the franchise was to expand the John Wick universe by bringing in multiple storylines and a longer run-time to let the action play out on screen, rather than having the editing dictate the action.
“The other films are very much like, you know, that John is on a direct rampage or running for his life. This film was intentionally designed to be more reflective and contemplative: after his entire career as a hitman, he is forced to reckon with his past and what he’s done.”
Stahelski’s influences range from the lush visuals of Wong Kar-wai to the operatic staging of Sergio Leone westerns. As the director explained to Jim Hemphill at IndieWire: “I love the seventies movie style. I love four act operas. I love Kabuki theater. The Asian cinema kind of breaks a lot of rules that we adhere to in the three act version [of movies] and we’d like to think John Wick breaks a lot of those rules because we do go a little operatic.
“Lawrence of Arabia is a good example like that. That movie kind of flies by to me and it doesn’t feel like you need an intermission in it.”
The filmmaker’s homage goes so far as to mimic the famous “match cut” by editor Anne V. Coates in David Lean’s Lawrence of Arabia, in which Lawrence in profile blows out a match and Coates cuts to a blazing desert sunrise.
“I remember vividly when I went to set in Paris, Chad asked me ‘what’s the most famous cut in all of cinema?’ and said we’re going to do it our way,” Orloff relates to Next Best Picture. “I wanted to make sure we did the exact number of frames when the fire was blown out before cutting to the sunrise. You know, I wanted to do it justice.
“He told me he’d rather swing and miss than do the same thing over again. And so that match cut is indicative of [telling the] audience what we’re going for.”
Production stills from “John Wick: Chapter 4”: Keanu Reeves as John Wick, Donnie Yen as Caine, Bill Skarsgård as Marquis, Marko Zaror as Chidi, Ian McShane as Winston, Lance Reddick as Charon and Clancy Brown as Harbinger, along with director Chad Stahelski on set with Reeves, Laurence Fishburne, McShane and Skarsgård. Cr: Murray Close/Lionsgate
Another acknowledged influence on the director’s action style is the classic MGM musical, particularly those featuring Fred Astaire. In films like Singin’ in the Rain or Top Hat, the camera generally stays static and in wide shot, with minimal edits, so the viewer can take in all the dancing brilliance performed by the film’s stars.
“I love Bob Fosse here, one of my huge inspirations,” Stahelski tells IndieWire. “You take Gene Kelly, the old Sunday Morning Sunday Parade or something like that. You watch Fred Astaire do his thing. And if you watch the way we shoot, it’s very simple. The way we train people [to perform stunts] is very, very, very dance oriented.”
Orloff elaborates on what this means to decisions in the cutting room.
“Musicals like back then were sort of like you edited around the dancing,” he says on an episode of The Rough Cut podcast. “You showed them dancing. They would do a move, finish, cut, start something else. And the way Chad talked about that really inspired me to do that with our characters and not use the editing to try to punch anything up.”
There are times when the stunt performance, or that of Keanu Reeves, isn’t quite perfect: “they slip or there’s something not great about the timing of this or that, but not being so obsessive about perfection makes it just so much more real. When you’re cutting less, you’re able to absorb everything more. You feel more empathy for the characters because you feel like you’re just there.”
John Wick: Chapter 4 clocks in at 169 minutes, more than an hour longer than the original. Stahelski explains why he wanted a movie of this length.
“In our heads we knew that we wanted to show this constant decreasing circle that spirals closer and closer as [the stories] come together. So every act brings us closer together. That was the plan. It sounds like a very genius plan, but you don’t know until you cut the whole thing together. Our first cut was 3 hours 45 minutes.”
So how did the edit team set about cutting that down, and knowing which killing to leave in or excise?
“When you have 14 action sequences, you can’t just edit that sequence,” the director explained. “You’ll never know if a five minute car scene or a ten minute car scene is good to watch in the two and a half hour movie.
“So the only way to truly know that you’re doing the right thing is to step back and take that half day. We’d edit all morning but by four p.m. we’re like, ‘Let’s watch the movie.’ And my editorial staff probably hates me. We’ve watched it so many times because literally even if we just took like 30 seconds out of something, I’d make everybody watch the movie again, because that’s the only way you know you have the right pace.”
He adds, “It’s the whole song that makes you rock out. I think that was a big learning experience for me and my editorial team, to constantly watch a two- to five-hour movie and feel where the slow parts were and to work on those parts.”
Because John Wick is dispatching henchmen left and right in intricately planned and executed stunts, deciding what to cut was tricky, admits the editor.
“There is definitely sometimes overkill when something is too similar to something else,” Orloff told Next Best Picture, “but going back to the music was a huge help in creating different tones and alternating what we were doing to avoid the things feeling the same. And to Chad’s credit, especially in the last act when we go from street fight to a car chase to a lengthy overhead shot, even though the audience has watched non-stop action for 30-45 minutes, the movie is structured so skillfully that you’re seeing something you’ve never seen before.”
How “The Boy, The Mole, The Fox and The Horse” Won Hearts and Minds
TL;DR
Based on the bestselling illustrated book by Charlie Mackesy, the Oscar-winning animated short film “The Boy, The Mole, The Fox and The Horse” has been described as “‘The Little Prince’ for a new generation.”
The international animation team that brought the film to life spanned 20 different countries, with artists working remotely due to the pandemic.
The filmmakers wanted to retain the signature style of Mackesy’s ink and watercolor illustrations, with Mackesy closely involved in the process to ensure that the film stayed true to his vision.
The Oscar-winning short The Boy, The Mole, The Fox and The Horse is like receiving an “emotion bomb” when you first see it. If you have any pent-up sentiment left over from the pandemic, Charlie Mackesy’s animated story of a young boy and his animal friends might extract it from you, so be warned.
The award-winning animated story, now streaming on Apple TV+ and the BBC iPlayer, is the realization of Mackesy’s beautifully rendered ink and watercolor drawings, which were immortalized in an illustrated book that ended up topping the bestseller lists in both the United States and the UK.
Filmmakers then approached Mackesy about taking the story to the next level. But how do you turn such heavily characterized drawings into moving images while keeping the artist’s signature style?
Initially, Mackesy’s intentions were less about the bottom line than about more spiritual and Christian ambitions. He explained to Ryan Fleming at Deadline that helping people was his driving force, and he thought the film would add to that.
That the book even became a hit shocked him, Mackesy said. “When the book came out, I got so many emails, like thousands of emails, telling me how the book had moved them or helped them, particularly in the pandemic,” he said. “I felt like if the book had done that, could a film reach people in the same way?”
He soon had his answer. After reading Mackesy’s book in 2019, producer Cara Speller said she “completely fell in love with it and got in touch with Charlie and his partner, Matthew Freud, and talked to them about what we could potentially do in turning it into a short film.” After a discussion with the creators, Speller contacted Peter Baynton, who was ready to join as director.
Speller told Jérémie Noyer at Animated Views how important it was to have Mackesy front and center in the process. “It was always really important to me right from the start that Charlie be at the center of any team that we put together to make the film. You can tell immediately from the book that he has incredibly strong instincts about what works. To me, it didn’t make any sense to try and make that without having him so closely involved.”
The animation team worked remotely because of COVID, with a shared goal of creating a look that reflected as closely as possible the drawings in the book, which were ink and watercolor. “We wanted to make those drawings basically move but keep the spirit of the fluidity of the ink and the line and the varying thickness of line,” Mackesy says.
Stills from “The Boy, the Mole, the Fox and the Horse,” now streaming on Apple TV+, along with lead compositors Nick Herbert and Johnny Still and art director Mike McCain at work on the film.
Director Peter Baynton underlines the connection between Mackesy’s style and his animation team: “Charlie’s drawing is underpinned by a great knowledge of anatomy. So, even though he draws extremely quickly and quite impressionistically, you can tell he knows horse or boy or fox anatomy so well. For the mole, it’s a little bit different.
“It was important not to lose that lovely loose quality and make things stiff. So, we came down to a system where we would animate quite tightly on detailed models, and then, on the ink stage, we encouraged the artists to find that looser way of inking. It was about finding that very fine line that sort of drifts around the characters.”
“It was a very international crew,” noted Speller, “coming from 20 different countries. We started the work on the film in the middle of the pandemic, so everyone was working remotely from their homes. We built the team in the same way you build any team on a production. You’re always looking for the most talented artists you can find; it doesn’t matter where they are in the world, as long as you think they’re the right fit for the project and for the team.”
“The style warrants movement,” said Gladstone, “but how did you achieve it? The line halo that goes around the drawings, how is that translated to movement?” Director Peter Baynton explained the significance of the halos: “Charlie describes those lines as thinking lines and they’re very characteristic of his drawings,” he noted.
“The process is that we start with pencil rough character animation, to define the performance and then it goes through a clean-up stage where we adjust the model where everything looks like a model and then we go to an inking stage where we do these key ink drawings and at that point we would add these lines, these thinking lines or ‘thinkies,’” he continued.
“It was a careful balance as sometimes that would feel too stiff and attached like a wire so we found a way of making them dissipate and reappear.”
Art director Mike McCain summed up Mackesy’s style and how it was transferred to motion. “Charlie has such a beautiful economy with ink, and the book has such a minimal approach to storytelling; it’s just what’s needed on the page,” he said. “As we were looking to bring that wilderness to life, the biggest challenges were finding how to add and not to over-add. Just put what’s needed on screen to make it feel like you’re surrounded by this world.”
Variety’s Peter Debruge calls the short “The Little Prince for a new generation.” He goes on to add, “Beautifully adapted from British illustrator Charlie Mackesy’s international bestseller. Those who know the book — a Jonathan Livingston Seagull-esque life preserver for many during the pandemic — will appreciate how the team managed to translate Mackesy’s unique ink-and-watercolor style, with its distinctive blend of thick brushstrokes and loose, unfinished lines.
“Isobel Waller-Bridge’s gentle score coaxes audiences into a receptive place, while the quartet of Jude Coward Nicoll (the Boy), Tom Hollander (the Mole), Idris Elba (the Fox) and Gabriel Byrne (the Horse) lend sincere voice to various affirmational ideas,” Debruge continues.
“Cynics may dismiss what one acquaintance called its ‘bumper sticker wisdom,’ but they miss how vital it is to plant ideas of this nature in the heads of young viewers: boosting their confidence and unpacking what it means to feel lost — or seen — before social media brainwashes them otherwise.”
Resilience, Remote Collaboration, and Creativity on “The Boy, the Mole, the Fox, and the Horse”
Watch the NAB Show session above.
TL;DR
The 2023 NAB Show will host a Main Stage conversation with the creative team behind Academy Award-winning animated short film “The Boy, the Mole, the Fox, and the Horse.”
Open to all attendees, the session “How to Win an Oscar With a Fully Remote Creative Team” will take place Sunday, April 16 at 2:00 p.m. and will feature visual artists from the production.
Art Director Mike McCain and Animation Senior Support Specialist Ben Wood will chat with session host Dave Leopold, strategic development director at LucidLink, about how cloud workflows allowed the film’s creatives to collaborate.
The 2023 NAB Show will host a Main Stage conversation with the creative team behind Academy Award-winning animated short film “The Boy, the Mole, the Fox, and the Horse.”
Open to all attendees, the session “How to Win an Oscar With a Fully Remote Creative Team” will take place Sunday, April 16 at 2:00 p.m. and will feature visual artists from the production.
Art Director Mike McCain and Animation Senior Support Specialist Ben Wood will chat with session host Dave Leopold, strategic development director at LucidLink, about how cloud workflows allowed the film’s creatives to collaborate.
“The Boy, the Mole, the Fox, and the Horse” first aired on the BBC in December 2022 to more than seven million live viewers.
McCain, who has worked with a variety of studios, has credits on “Spider-Man: Across the Spider-Verse” and “The Ghost and Molly McGee.” Before focusing on animation and painting, he directed video games.
Art director Mike McCain and lead compositors Nick Herbert and Johnny Still working on “The Boy, the Mole, the Fox and the Horse,” now streaming on Apple TV+.
Wood, who has more than nine years of visual effects industry experience, has worked at multiple VFX studios including Smoke & Mirrors, DNEG, and NoneMore Productions. He began his career as a post-house runner and then progressed to senior-level IT positions.
Leopold has held roles across the media and entertainment industry, including editor, motion graphics artist, producer and post supervisor. He most recently worked at ViacomCBS where he created content of all types. At LucidLink, he brings remote collaboration solutions to the global creative community.
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
NAB Show Leads an Exploration of the Evolving Internet
TL;DR
The 2023 NAB Show will explore the impact Web3 and other internet advances are having on the media and entertainment industry.
Attendees can learn about the next generation of the internet through educational programming, demonstrations, special events, networking sessions, and exhibitor participation on the show floor.
NAB Show will also feature the Intelligent Content Experiential Zone that will serve as a home base for attendees interested in new internet technologies. The area will allow visitors to participate in collaborative workshops and presentations.
NAB Show will explore the impact Web3 and other internet advances are having on the media and entertainment industry.
Exploration of the next generation of the internet at the 2023 NAB Show will include educational programming, demonstrations, special events, networking sessions, and exhibitor participation on the show floor.
“Web3 and other emerging technologies like generative AI and the metaverse are opening an entirely new chapter for content creators,” said Chris Brown, NAB executive vice president and managing director, global connections and events. “The 2023 NAB Show is the ideal platform to dive into these new tools and ideas by meeting face-to-face with the experts, innovators and companies that are unleashing the possibilities pushing the limits of our imagination.”
The 2023 event, which marks 100 years for NAB Show, takes place from April 15-19 at the Las Vegas Convention Center.
Event educational sessions will span multiple conferences and tracks, looking at key trends surrounding the next era of internet tech. Topics covered include Web3, data and analytics, generative AI, metaverse, and blockchain and NFTs.
“Web3 is a rapidly evolving technology, and the most successful companies will be those that are willing to experiment with new approaches and collaborate with other industry players,” said Andrea Berry, head of development at Theta Labs and a member of the NAB Show Web3 Advisory Council.
The Web3 Advisory Council, which offers guidance and expertise on NAB Show programming regarding the next generation of the internet, will provide an update on April 17 on the state of Web3, the impact of technology and the current economic and cultural trends that are driving the next phase of content.
NAB Show will also feature the Intelligent Content Experiential Zone, which will serve as a home base for attendees interested in new internet technologies. The area will allow visitors to participate in collaborative workshops and presentations with products from companies such as Interra Systems, Shotshack, Veritone and Wiland. The zone will also feature roundtables, theaters, the AWS Partner Village, the Innovation Village and NABiQ.
In collaboration with StoryTech, NAB Show will offer attendees guided, curated tours. Options include the Data, Data, Data tour, focusing on data and metadata management; the New Production Modalities tour, covering Web3, virtual production solutions and immersive content creation tools; and the Evolution of Video tour, showcasing the current state of video.
A variety of companies will be exhibiting new Web3 and other next-gen internet tech and solutions at NAB Show. These companies include Amagi, AWS, Digital Nirvana, Microsoft, Oracle, SDVI, SSimWave, TSV Analytics, Veritone and Vistex.
The Art of the Prompt
BY ROBIN RASKIN
TL;DR
Now that the initial knee-jerk reactions to having Generative AI as our companions have quieted down a bit, it’s time to get to work and master the skills so that Generative AI is working for us, not the reverse.
The Kevin Roose shockwave goaded every tech columnist into writing something about how they broke AI through a combination of provocation and beta testing the hell out of publicly released platforms.
Educational institutions are trying to figure out whether to ban Generative AI or teach it to their students. We’re rolling up our collective sleeves for the human/machine beta test.
Now that the initial knee-jerk reactions to having Generative AI as our companions have quieted down a bit, it’s time to get to work and master the skills so that Generative AI is working for us, not the reverse. The Kevin Roose shockwave goaded every tech columnist into writing something about how they broke AI through a combination of provocation and beta testing the hell out of publicly released platforms like Bing AI, Google’s Bard, and the wildly popular ChatGPT.
In the early days of ChatGPT’s general release, CNET had some faux pas, with plagiarism and misinformation seeping into its AI-generated journalism. This week Wired magazine very carefully spelled out its internal rules for how it will incorporate generative AI in its journalism. (No for images, yes for research, no for copyediting, maybe for idea generation.) Educational institutions are trying to figure out whether to ban it or teach it to their students. We’re rolling up our collective sleeves for the human/machine beta test.
Meanwhile, folks like Evo Heyning, creator/founder at Realitycraft.Live and author of a lovely interactive book called PromptCraft, have been doubling down to dissect, coach, and cheer us into the world of using Generative AI effectively. The book, co-written with a slew of AI companions like Midjourney, Stable Diffusion, ChatGPT, and more, looks at the art, the science, and the many iterations that help get the most out of creative human/machine communications. You can watch some of her fast-paced PromptCraft lessons on YouTube. They’re kind of the AI generation’s answer to Bob Ross’s art episodes on PBS.
A Magic Mirror for Collective Intelligence
Heyning has worked in AI, as a coder, storyteller, and world-builder since the early days of experimentation. She’s also been a monk, chaplain, and just about everything else that defines a renaissance woman who thinks deeply about AI. “Are the models merely stochastic parrots that spit back our own model? Or are they giving us something that’s a deeper level of comprehension?” she asks.
A prompt and resulting output using Midjourney.
“AI,” she continues, “is like querying our collective intelligence. Right now most of our chat tools are mirrors of everything that they’ve experienced. They’re closer to asking a magic mirror about collective intelligence than they are about any sort of unique intelligence.”
Our job is to learn the language of the query that coaxes the best out of the machine. “AI Whisperers,” those who can create, frame, and refine prompts, are out of the gate with a valued skillset.
While prompts for generating text, images, movies, and music will vary, there are certain commonalities. “A prompt,” says Heyning, “starts by breaking down the big vision of what you’d like to see created, encapsulating it into as few words as possible.” She likens a lot of the process to a cinematographer calling the shots. “You’re thinking about what the focal point of your creation will be. The world of the prompt is about our relationships with AI, and it includes shifts in language that come from both sides, not just from the human side, but also from alternative intelligences.”
Five Easy Pieces
Heyning talks about her process of including five pieces in a prompt. They include the type of content, the description of that content, the composition, the style, and the parameters.
Content Type: In art prompts, the type of content might be a poster or a sticker. For text it might be a letter or a research paper.
Description: The description of the content defines your scene (a frog on a lily pond).
Composition: The composition is the equivalent of your instruction in a movie (frog gulping down a fly or in the bright sunshine).
Style: The style might be pointillism (or, for text, the style of comedy writing).
Parameters: Finally, the parameter might be landscape or portrait or, for text, a word count.
Providing context is also a key component. Details about the setting, characters, and mood help you get the image you had in your mind’s eye. “Negative weights,” things that should not be in your creation, can be important, too. Heyning discourages using artists’ names, especially living artists’ names, in prompts, since such derivatives beg copyright questions. She also reminds us to use commas in our prompts to make them more intelligible to the machine: “They act as separators to help the generator parse a scene.”
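To make that structure concrete, here is a minimal sketch in Python of a prompt builder following Heyning’s five pieces, with comma separators and negative weights included. The function name, field names and the sample frog prompt are illustrative assumptions, not Heyning’s exact template; the --no (negative weight) and --ar (aspect ratio) flags follow Midjourney’s parameter syntax.

# A minimal sketch of a five-piece prompt builder (illustrative, not
# Heyning's exact template): content type, description, composition,
# style, and parameters, plus optional negative weights.
def build_prompt(content_type, description, composition, style,
                 parameters="", negatives=None):
    # Commas act as separators that help the generator parse the scene.
    prompt = ", ".join([content_type, description, composition, style])
    if negatives:
        # "Negative weights": things that should NOT appear in the result.
        prompt += " --no " + ", ".join(negatives)  # Midjourney-style flag
    if parameters:
        prompt += " " + parameters  # e.g., aspect ratio
    return prompt

print(build_prompt(
    content_type="poster",
    description="a frog on a lily pond",
    composition="gulping down a fly in bright sunshine",
    style="pointillism",
    parameters="--ar 3:2",  # landscape orientation
    negatives=["people", "text"],
))
# poster, a frog on a lily pond, gulping down a fly in bright sunshine,
# pointillism --no people, text --ar 3:2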
Heyning’s quite the optimist about how humans and AI will work together, even in much-debated areas like education. “Kids are learning about art history from reading prompts created using Midjourney,” she marvels. “They are introduced to impressionism, realism and abstract art. They’re using terms like knolling (knolling is the act of arranging different objects so that they are at 90-degree angles from each other, then photographing them from above), once relegated to the realm of trained graphic designers.”
What did I learn from my crash course in prompting? The power of a good prompt is the power of parsimonious thinking — getting to the essence of what you want to create. It’s similar to coding, but different, because you don’t need to learn a foreign language; this is a much more Zen-like effort, stripping away all that’s unnecessary, down to the perfect phrase. (P.S. If you prompt ChatGPT to tell you how to write the perfect prompt, you’ll read even more about the Art of the Prompt.)
Even with AI-powered text-to-image tools like DALL-E 2, Midjourney and Craiyon still in their relative infancy, artificial intelligence and machine learning are already transforming the definition of art — including cinema — in ways no one could have predicted. Gain insights into AI’s potential impact on Media & Entertainment in NAB Amplify’s ongoing series of articles examining the latest trends and developments in AI art.
Blinding Lights: Creating Cinematic Beauty for The Weeknd’s Concert Special
TL;DR
Both nights of The Weeknd’s spectacular 90-minute show at Inglewood’s SoFi Stadium in LA were recorded for an HBO concert special, “The Weeknd: Live at SoFi Stadium,” which is now streaming on HBO Max.
Director Micah Bickham employed roughly 25 Sony Venice cameras outfitted with Angenieux and Canon Cine zoom lenses to capture footage of the live concerts.
The production team partnered with a company called Live Grain to add texture and grain to the concert footage to emulate 35mm film stock.
Last November The Weeknd, aka Abel Makkonen Tesfaye, put on a spectacular 90-minute-plus show at Inglewood’s SoFi Stadium in LA. Both nights were recorded for an HBO concert special, The Weeknd: Live at SoFi Stadium, which is now streaming on HBO Max.
It was the last stop of the first leg of the “After Hours til Dawn” tour, and Tesfaye pulled out all the stops to reinforce his reputation as a performance artist while still confounding the critics as to which music genre to place him in.
The special was directed by Micah Bickham, whose collaboration with the artist goes back to the Starboy era. He talked to NAB Amplify about how the show was created, recorded and broadcast.
“My focus with The Weeknd is particularly around live production,” Bickham said. “We have quite the partnership really from the Starboy era around 2015. It’s been a handful of years just to understand the world they’re creating from an album point of view and how that translates into live.”
SoFi Stadium was primarily chosen for the recording because The Weeknd was doing two nights there. Both nights would be recorded and then cut together. “Just thinking how I wanted to shoot it and present it, we had to shoot across the two nights, plus a handful of pickups that we ended up doing. Also being LA, it was just perfect.”
The discussion before the show about how they wanted the concert film to look took quite a few diversions but a cinematic theme was always front and center. “We talked a lot about cinematic integrity. Yes it’s a concert and yes it’s an artist performing these songs but with a world being created and shaped inside of it,” he said.
“We talked about what the DNA and visual language of this film was but in the end for me it had a lot to do with how we presented it more like a film and less like a concert. What I mean by that is when you watch the film the pace and the tension that the pace creates is pretty unusual for a typical concert film.
“We wanted you to sit with the artist and digest what was happening right in front of you, not through an edit and cut that might pull you away too quickly. We wanted you to live in it; when you see it, there’s something that resonates differently than a typical concert film.”
The Weeknd’s live shows have already made headlines, especially his 2021 Super Bowl halftime show, which he reportedly underwrote to the tune of several million dollars. That show went on to be nominated for Emmys for Outstanding Variety Special (Live), Outstanding Lighting Design/Lighting Direction for a Variety Special, and Outstanding Technical Direction, Camerawork, Video Control for a Special.
The SoFi concerts were specifically staged to let viewers see as much of The Weeknd as possible. Tesfaye had the run of the center of the stadium, with an apocalyptic Toronto skyline at one end and a huge suspended moon at the other. The band was hidden out of sight, leaving Tesfaye on his own apart from 33 dancers parading as red-shrouded sirens who walked as one.
“The Weeknd: Live at SoFi Stadium.” Cr: HBO Max
Concert films can be free-running, sometimes allowing the camera positions to operate without timecode, picking and choosing shots as they go. Bickham wanted a tighter regime for SoFi. “For this particular project there were a couple of differences, just because of the scale of it. It was important to me early on that if I just monitored the board and did a pure shoot-for-capture and didn’t create a line cut, my feeling was that we weren’t going to be able to hold that many cameras accountable to each moment,” he said.
“So the way I directed it was a little bit of a hybrid in the sense that I did cut a line cut. When a director cuts a line cut there’s an immediacy that takes place from the operators that you’re working with. Everybody sits up a little straighter and there’s a little more tension than if I asked them to ‘just shoot and I’ll nudge you around.’
“Certainly there are times when that’s important and the best way to approach it. For this I felt creating a little bit of tension and immediacy was important so everyone stayed focused. It’s a long show; top to bottom it’s just under two hours. It’s an easy scenario even for the best team to kind of settle in and perhaps not necessarily be on top of every single moment throughout the show. So yes, we cut it, but with a series of pickups too.”
These were mostly single close-up Steadicam shots featuring The Weeknd and the dancers. Adding them to the two nights at the stadium gave Bickham a substantial editing job, but inevitably it was all about finding the show. “We wanted to break it apart and understand the shape of the narrative and how we could build it in the edit.”
With around 25 Sony Venice cameras in play — both first- and second-generation, but mostly second — there was a lot of footage to work with. Lenses were primarily Angenieux and Canon Cine zooms, with a couple of prime lenses employed on handheld cameras. “They were all human-operated and were all on my Multiview and so we’re cutting the full volume of the 25 throughout the night,” Bickham detailed. “From night one to night two we augmented the positions of some of those 25 cameras just to enlarge our coverage. That gave us a slightly different mindset going into night two. It would just accentuate what we had already done on the previous night.”
In designing the cut, the plan was always to let it breathe, especially around Tesfaye, who was mostly alone in such a huge space. Bickham explains the thinking: “It was partly because it gives the audience an opportunity to be on stage with him. That’s a very unusual experience, especially for a stadium film. Additionally, by doing that it creates a tension. The audience are expecting you to cut, they’re expecting to be moved on to something wide or something different, and when you don’t do that and you stay kind of locked into that position, something really interesting happens; it makes the next shot that much more impactful.
“So in other words, we kind of lingered even if the song ramped up and became more manic. We didn’t let the pace of the moment dictate the pace of the film.”
Bickham and Le Mar Taylor, The Weeknd’s creative director, had talked a lot about letting moments develop in front of the lens and not blasting through the coverage. “We wanted our performance to be more like a film edit.”
The concert film was meant to be a celebratory career moment for Tesfaye, and the means of capture was always up for discussion, with even IMAX put forward as a medium. “We did think about using 35mm film; in fact, through our discussion we did end up partnering with a company called Live Grain,” Bickham recounted. “We wanted the concert film to have a representation of the texture of film, to push into a space where you don’t typically see it. So that was a huge part of our decision making, even through the grade and the finishing. It’s got a timelessness with this textural element to it and just feels different from a typical concert picture.”
The Live Grain process is usually applied to digitally shot movies. NAB Amplify previously reported on Steven Soderbergh’s No Sudden Move using the process, but for a live production it’s new.
“The cameras didn’t have any filtration in place just to make sure the process wasn’t disrupted. Live Grain is essentially real time texture mapping. A lot of great films that were shot digitally used Live Grain to make it feel like it’s 35mm. In a multi-camera almost two hour production 35mm itself isn’t really practical with the mag changes and the amount of film you use.”
The use of Live Grain was in fact introduced by HBO, which has an ongoing relationship with the company. “It’s been tested by them many times on films but our film was maybe the first time being used for a live concert application or certainly one of the first.”
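Live Grain’s pipeline is proprietary, but the general idea of texture-mapping grain onto digital footage can be sketched in a few lines. The snippet below is a toy illustration only, not Live Grain’s actual method: it blends a scanned grain plate over each frame, weighted toward the midtones, where film grain tends to be most visible. The strength value and the midtone weighting are illustrative assumptions.

# Toy sketch of film-grain texture mapping (NOT Live Grain's process):
# blend a scanned 35mm grain plate over a digital frame.
import numpy as np

def apply_grain(frame, grain_plate, strength=0.15):
    """frame and grain_plate: float32 RGB arrays in [0, 1], same shape."""
    # Center the plate around zero so grain both darkens and lightens.
    grain = grain_plate - grain_plate.mean()
    # Weight grain by a midtone curve: strongest at 50% gray, subdued
    # in deep shadows and bright highlights.
    luma = frame.mean(axis=-1, keepdims=True)
    midtone_weight = 4.0 * luma * (1.0 - luma)
    return np.clip(frame + strength * midtone_weight * grain, 0.0, 1.0)

# Example, with random noise standing in for a scanned grain plate.
rng = np.random.default_rng(seed=0)
frame = rng.random((1080, 1920, 3), dtype=np.float32)
plate = rng.random((1080, 1920, 3), dtype=np.float32)
grained = apply_grain(frame, plate)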
The post effect of film grain really nails the timeless cinematic look, but was there ever an option to broadcast the concert live? “There was a time when we considered a one-day IMAX special, but when HBO got involved we realized we had a great partner for what we eventually wanted to do, and it tied in with the upcoming drama series The Idol.
“Ultimately we were able to bring a more caring approach to it, we could take our time.”
Looking to stay ahead of the curve in the fast-changing world of live production? Learn how top companies are pushing the boundaries of what’s possible in live events and discover the cutting-edge tools and technologies for everything from live streaming and remote workflows to augmented reality and 5G broadcasting with these fresh insights from NAB Amplify:
Kendrick Lamar’s “The Big Steppers: Live from Paris” employed multiple digital cinema cameras to deliver a livestreamed outdoor broadcast.
Devoncroft Executive Summit Set For April 15 at NAB Show
TL;DR
The 2023 Devoncroft Executive Summit will take place April 15 in Las Vegas.
Running from 12 p.m. to 6 p.m. on the NAB Show Main Stage, the conference will feature speakers and moderated panel sessions with C-level executives.
This year’s event, with the theme “The Business of Media Technology,” will bring together thought-leaders from across the media technology sector.
The 2023 Devoncroft Executive Summit will take place April 15 in Las Vegas.
This year’s event, with the theme “The Business of Media Technology,” will bring together thought-leaders from across the media technology sector to discuss the issues facing the industry, share best practices and network with peers.
Running from 12 p.m. to 6 p.m. on the NAB Show Main Stage, the conference will feature speakers and moderated panel sessions with C-level executives from broadcasters, service providers, media technology vendors, and IT vendors.
YouTube Unveils 2023 Priorities As Shorts Monetization Struggles, Plus TikTok’s Surprising New Feature for Teens
By JIM LOUDERBACK
TL;DR
Neal Mohan reveals YouTube’s creator-centric priorities while Shorts monetization lags.
TikTok rolls out time limits for teens while the U.S., Canada, U.K. and the EU ratchet up the pressure.
Twitch’s never-ending creator problems, the surprising upside of paid verification, a call to restrict AI research and new crypto and consumer research.
This Week: Neal Mohan reveals YouTube’s creator-centric priorities while Shorts monetization lags. TikTok rolls out time limits for teens while the U.S., Canada, U.K. and the EU ratchet up the pressure. Plus, Twitch’s never-ending creator problems, the surprising upside of paid verification, a call to restrict AI research and new crypto and consumer research. It’s the first week of March and here’s what you need to know. Oh, and how’s that “in like a lion” working out for you?
New YouTube Chief Lays Out Priorities: A few weeks late, but Neal Mohan has laid out YouTube’s 2023 priorities in a blog post. There’s not a lot new – the most important message was Mohan’s strong endorsement of getting creators paid. Mohan did announce new AI tools – about time – although with “guardrails”. I think that means it’ll be a while before we see anything useful.
Trouble at Twitch Amidst Abundance: Congrats to Kai Cenat for becoming Twitch’s biggest streamer. His month-long subathon – a throwback to the original Justin.TV mission – pushed him over 300,000 subscribers. But it also renewed calls for Twitch to properly compensate creators as Drake suggested he get a $50M payout. Cenat, who just signed with UTA, seemed to agree. Could this be a Ninja repeat all over again? Twitch is trying to do better by creators at least in some ways. For example, the new “experiments page” provides transparency to streamers and provides an interesting lens for the Twitch curious (like me) too.
The Upside of Paid Verification: Although many (including me) decried the paid verification initiatives at Twitter and Meta, a few experts see a silver lining. Brendan Gahan sees a lessening of sensationalist clickbait stories and a renewed focus on quality content and user experience. Alex Kantrowitz goes even further, positing that because most platforms are now dominated by professional creators, it’s time for them to pay for the privilege of making money. I think Gahan’s vision is idealistic but unrealistic, while Kantrowitz ignores the paltry creator middle class that will likely pony up for the check. Decide for yourself – both takes are worth reading.
Thanks so much for reading and see you around the internet. Send me a note with your feedback, or post in the comments! And see you at SXSW!
Feel free to share this with anyone you think might be interested, and if someone forwarded this to you, you can sign up and subscribe on LinkedIn for free here!
For more on Mohan’s priorities, TikTok’s teen time limits, Jellysmack’s OTT plans and AI’s copyright dilemma, check out this week’s Creator Feed – the weekly podcast Renee Teeley and I produce – get it on Apple Podcasts, Spotify or Stitcher!
NAB Show’s BEIT Conference Dives Into Media Delivery
TL;DR
SMPTE President Renard Jenkins will deliver the opening keynote at the NAB Show Broadcast Engineering and IT (BEIT) Conference on April 15 at 10 a.m.
Running from April 15-18, the BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians, media technology managers, contract engineers, broadcast equipment manufacturers and distributors, engineering consultants, and R&D engineers.
The conference is produced in partnership with the Society of Broadcast Engineers, the Society of Cable Telecommunications Engineers and the North American Broadcasters Association.
SMPTE President Renard Jenkins. Cr: NAB Show
SMPTE President Renard Jenkins will deliver the opening keynote at the NAB Show Broadcast Engineering and IT (BEIT) Conference on April 15 at 10 a.m.
Jenkins, currently senior VP of production integration and creative technology services at Warner Bros. Discovery, has spent more than 35 years in the television, radio, and film industries.
Running from April 15-18, the BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians, media technology managers, contract engineers, broadcast equipment manufacturers and distributors, engineering consultants and R&D engineers.
“The BEIT Conference is the place for media professionals to discover the latest breakthroughs helping to make the content pipeline more effective, efficient and expedient,” said Sam Matheny, NAB executive vice president of Technology and chief technology officer. “We are looking forward to an impressive lineup of presentations at NAB Show that will provide our community with real-world insights into keeping pace with the rapid evolution in how content gets delivered.”
The conference is produced in partnership with the Society of Broadcast Engineers, the Society of Cable Telecommunications Engineers and the North American Broadcasters Association.
BEIT will also feature the presentation of technical papers on topics including NextGen TV, artificial intelligence, data analytics, cybersecurity, media workflows, innovation in radio, media in the cloud, hybrid radio, sustainability, streaming, 5G and video coding, among others. The papers will be included in the BEITC Proceedings, which will also be released by PILOT, the innovation wing of the National Association of Broadcasters, on April 15.
NAB Show Is Immersed in… Immersive Storytelling
From Dreamscape Immersive’s “Dragons Flight Academy” experience
TL;DR
NAB Show will explore how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18 at 1 p.m. at the Las Vegas Convention Center.
The session, titled “Immersive Storytelling: Expanding Audiences with XR in Games, Education, and Location-Based Entertainment,” will feature leaders in advanced technology.
Panelists include Aaron Grosky, president and COO of Dreamscape Immersive and COO of Dreamscape Learn; Ted Schilowitz, futurist, Paramount Global; and Jake Zim, senior VP, virtual reality, Sony Pictures Entertainment.
Dreamscape’s Aaron Grosky (l), Paramount Global’s Ted Schilowitz (c), and Sony Pictures Entertainment’s Jake Zim (r) will participate in a Main Stage session on immersive storytelling during the 2023 NAB Show.
NAB Show will explore how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18 at 1 p.m. at the Las Vegas Convention Center.
Immersive experiences have become easier to access than ever before. From headsets at home and in schools to location-based entertainment venues, consumers are embracing innovative ways to find their favorite stories.
Today’s entertainment technology has the ability to make every player the main character in their favorite worlds, expanding the universes they love and breathing new life into these stories and characters. We can now immerse our audiences’ minds in infinite architectures—from blasting ghosts in the Ghostbusters universe to teaching biology by having students solve the mystery of a dying species at an intergalactic wildlife sanctuary.
Sony Pictures Virtual Reality’s “Ghostbusters: Rise of the Ghost Lord” VR game
Sit down with our panelists as they discuss increasing convergence between traditional entertainment and advanced technology; how nostalgia fuels new technology adoption; and what’s next for VR/AR/XR in the entertainment industry.
Grosky oversees the creation of VR adventures for Dreamscape Immersive and Dreamscape Learn. The adventures are designed to give users the experience of watching a story unfold around them as they explore cinematic worlds, characters, and creatures. He previously served in strategic leadership and creative development roles for entertainment ventures focused on television, radio, music, online, and mobile productions.
Schilowitz, the first-ever futurist-in-residence for Paramount Global, works with the company’s technology teams to explore new and emerging technologies, including VR and AR. He previously served as consulting futurist at 20th Century Fox and worked on product development teams that have produced ultra-high resolution digital movie cameras, advanced hard-drive storage products, and desktop video software.
Paramount’s VR experience for “A Quiet Place”
In his role at Sony Pictures Entertainment, Zim oversees global VR production and strategy for the motion picture group. He has produced a variety of interactive projects released across a spectrum of distribution channels. In addition, he works across business units to develop partnerships with technology and production companies in the emerging immersive entertainment space.
In the future, photoreal synthetic humans will be a regular part of our day-to-day lives, and in China we can already catch a glimpse of this in action.
From brand ambassadors to virtual live streamers and virtual tour guides, digital human beings have become commonplace in China, not only in cyberspace but also in real life, where their presence is growing.
These digital avatars are known as “meta-humans.” The term should be distinguished from a “posthuman” or “transhuman,” defined as an individual who has enhanced their physical and cognitive abilities beyond what is considered normal for a human being.
Epic Games also has software for creating realistic digital humans called MetaHuman.
In China, meta-humans are described as digital characters of such photorealism that they are getting to the point of being indistinguishable from real life.
Dao Insights, owned by London and Shanghai-based creative agency Qumin Group, says such digitized humans are at the core of China’s metaverse ambitions.
The country’s first hyper-realistic meta-human is called AYAYI, created by Chinese tech company RM Group.
In an interview with Dao Insights, RM Group co-founder Nicky Yu explains that there are two main applications for virtual humans: functional ones that might serve as the automated face and voice of virtual assistants for companies in hotels or banks, for example; and those intended for more creative media and entertainment-based ends.
These so-called IP-oriented virtual humans include anime-based characters and hyper-realistic humans, like AYAYI.
According to Yu, the commercial model of IP-oriented virtual beings is rather unstable: “Just as every movie can’t be a hit at the box office,” he says.
The value of the more service-oriented avatars depends as much on how capable their AI is as on their cost, “whereas the appearance of the creation is less relevant,” according to Yu.
Virtual idol group A-Soul, courtesy of Keep
He describes creating a meta-human as similar to the production of a movie.
“We started with a script outlining the character’s persona and created a sketch based on those pre-set personalities and then modelled it. Once we were satisfied with the modelling, we launched a market survey, collecting feedback from the public to see if they think the appearance matches the persona we created. After that, we further polished and enriched the design of the character.”
It took RM Group just half a year to deliver AYAYI, from initial design to finished character.
One reason for the popularity of virtual stars in China, as perhaps in South Korean culture too, is that they are insulated from celebrity scandal.
“In recent years, there has been a sense of disappointment and betrayal arising amongst the fan base. As a result, some fans have stopped following stars,” Yu explains.
“Whereas the image of virtual influencers is more controllable and they are always free from scandals. Therefore, they are a much safer option compared to their human counterparts.”
Brands can exploit the malleability and scandal-free persona of a virtual “idol” to engage customers by engendering an “emotional touch and maintaining a strong loyalty amongst fans.”
This is a classic extension of digital marketing.
Yu emphasizes that it is the story and content curated around digital characters that bring real impact for brands on their target audience.
“For example, if a digital human being can create music, which is powered by AI and is liked by audiences, then people are more likely to endorse the virtual being because of the work. Here, AI-generated music is the medium where digital characters can communicate with the public and further establish a relationship with them.”
The metaverse industry in China’s financial hub Shanghai alone is set to hit 350 billion RMB ($52.3 billion) by 2025.
Industrial parks in the city include two dedicated to the metaverse, two focusing on the digital economy, and three designated for intelligent terminal technology, “creating a comprehensive ecosystem that would enable the facilitation of the multi-layered virtual world.”
Yu says RM Group plans to integrate digital assets closer with the physical world, “strengthening the connection between the virtual and real spheres,” and believes the concept of meta-humans has barely scratched the surface.
“I believe a digital life will be a crucial component of virtual human beings in the future,” he says. “When each of us has a digital twin who can understand us in cyberspace or a robotic likeness to conduct daily activities and socialize in the virtual realm, that’s when we can say the era of digital humans and the metaverse has come.”
The metaverse may be a wild frontier, but here at NAB Amplify we’ve got you covered! Hand-selected from our archives, here are some of the essential insights you’ll need to expand your knowledge base and confidently explore the new horizons ahead:
Could South Korean K-pop singers competing as digital avatars in a virtual universe point the way to the future of entertainment?
Jim Louderback: YouTube Makes Global Domination Easier for Creators, Just as the Extreme Dangers of Social Media Are Revealed
By JIM LOUDERBACK
TL;DR
YouTube makes it easier to add multiple language tracks to videos – great news for creators and viewers alike.
Social media may be unhealthy for teens, especially girls, and regulators may step in. The industry needs to step up and address this.
AI-generated images are now essentially open source, mainstream media discovers creator-first brands and an AI video generator that actually works today.
This Week: YouTube makes it easier to add multiple language tracks to videos – great news for creators and viewers alike. However, social media may be unhealthy for teens, especially for girls, and regulators may step in. Also, AI-generated images are now essentially open source, mainstream media discovers creator-first brands and an AI video generator that actually works today. I’m also excited to welcome “wndr” as our sponsor this week, a cool new app that helps travel creators monetize their passion. It’s the end of February and here’s what you need to know now!
YouTube Opens Up Videos To The World: Nuseir Yassin (Nas Daily) has been telling creators to embrace other languages for years, saying that “80% of the world doesn’t speak English, so if you only make content in English, you are only talking to 20% of the world.” Now YouTube is making it easier to add multiple language tracks to a video. Jimmy Donaldson’s (Mr. Beast) company has been a leader here as well. He worked with Unilingo on his first Spanish language channel, and subsequently ramped up his own internal dubbing capability. Great news also for Papercup, an early leader in delivering AI-generated translations that preserve the cadence and voice of the source material. As Nuseir says, “You should localize your content because a kid in Egypt deserves to hear you just as much as a kid in Wisconsin.” Props to YouTube for making that easier for creators AND viewers.
Social Media Unhealthy For Teens: We’re seeing more and more evidence that social media is really bad for teen girls. Given that tweens and teens live on social, this is a problem. The industry needs to step up and address this – but I expect regulators to step in as well. China is leading here, as the local version of TikTok limits kids 13 and under to 40 minutes a day – and online gaming is restricted as well. Pinterest CEO Bill Ready is taking the lead in the U.S. as the company builds on its reputation for being a safe space. Expect this issue to only grow over the coming months.
AI-Generated Images Are Open Source: That’s right – AI-created images cannot be copyrighted. Eric Farber, founder of Creators Legal, told me he wasn’t surprised, because “original works can be copyrighted if they are human created, not machine created.” This has huge ramifications for creators as it moves into chat results, video, 3D models and other areas. I also wonder just how much you would need to customize a GenAI creation to make it protectable. Farber responds that there’s “a lot left to play out. The most significant thing is that the copyright act hasn’t been truly updated in years and just doesn’t cover our world today.”
Mainstream Media Discovers Creator-First Brands: This Washington Post story on KSI and Logan Paul’s Prime brand concludes that community and cult will drive new brand development over the next 10 years. There are many more examples beyond Prime, but also beware of the cautionary tale that is Tesla. Elon Musk – arguably the world’s biggest influencer – drove Tesla to the top but is now destroying both Tesla and Twitter with his flailings. Cathy Hackl posts that creators shouldn’t be “afraid to launch new things”. But trust and community can be fleeting. If you launch a brand, be very careful that you don’t screw it up.
AI Generated Video Finally Arrives: Video generation platform Wochit released an AI experiment last week, which uses a ChatGPT-like AI to generate a surprisingly good video based on a 1-2 sentence description. This current version lacks a voice track but draws on Wochit’s decade-long storehouse of images, b-roll and attractive templates to build short but compelling videos that are ready for posting to Facebook, YouTube and other platforms. Read my deeper analysis and watch my first video for more insight. Full disclosure, I was on Wochit’s advisory board 7 years ago, but have no connection to the company today.
SPONSOR: Introducing wndr – the best way for travel creators to convert followers into hotel bookings. With wndr, creators can customize, personalize and connect their own travel storefront with over a million hotels worldwide, and offer discounted hotels to their followers directly from their social media profiles.
Wndr is revolutionizing travel creator monetization by democratizing the power of global booking platforms for creators, allowing them to generate up to 10% off every booking made on their wndr page.
Interesting essay on why fandom isn’t a job, with a bonus 8-year-old map of Tumblr. Oh, and Tumblr turned blue into green with a Twitter verification parody that hilariously actually made a bunch of money.
Tip of the Week: Setting up a Discord server is a non-trivial task with lots of pitfalls. But the Communityone newsletter just finished its three part series on how to set up Discord to perfection. Read Part 1 here (ht Brendan Gahan).
What I’m Playing: Finally beat Pokémon Violet last week. Super fun for open-world gamers even if you’re not a Pokefan.
Thanks so much for reading, and see you around the internet. Send me a note with your feedback, or post in the comments! Feel free to share this with anyone you think might be interested, and if someone forwarded this to you, you can sign up and subscribe on LinkedIn for free here!
From Instagram’s new messaging platform to Susan Wojcicki’s YouTube exit, Jim Louderback has all the details.
Jim Louderback: Instagram’s Shockingly Awesome New Messaging Feature and the Platform That’s Turning Everyone Into a Creator!
BY JIM LOUDERBACK
TL;DR
Lots of chatter about YouTube’s long-time CEO stepping back last week. Some of the best: OG Creator Economy exec Leslie Morgan decries the loss of women leadership in Silicon Valley and the more bro-ish content direction YouTube has seen recently.
Roblox envisions everyone as a creator, says CTO Daniel Sturman. They plan on using generative AI to allow every user to create items, skins, clothing, and even full-on experiences on the platform.
We need a word for AI anthropomorphism. Because that’s what this article is. And that’s what Ben Thompson, author of Stratechery, has spent countless hours falling victim to.
This Week February 21, 2023: What Susan Wojcicki’s departure means for the creator economy, all about Instagram’s innovative new messaging feature that lets creators talk directly to their fans, how Roblox wants to turn everyone into a creator, the weird anthropomorphism roiling the chatbot wars, and TikTok’s efforts to keep creators and increase traffic. It’s the end of February 2023 and here’s what you need to know.
YouTube CEO Wojcicki Steps Down: Lots of chatter about YouTube’s long-time CEO stepping back last week. Some of the best: OG Creator Economy exec Leslie Morgan decries the loss of women leadership in Silicon Valley and the more bro-ish content direction YouTube has seen recently. Longtime YouTube exec Priscilla Lau shares her experiences working with Wojcicki over the past 15 years and anticipates uncertainty to come. I took a look forward at YouTube under its new leader, former head of product Neal Mohan. I’m an optimist here, but I share Morgan’s concerns too – and hope Mohan is as creator-forward as Wojcicki was.
Instagram Messaging Arrives: Matt Navarra calls it “the best new feature in years”, as Instagram adds “Broadcast Channels” to the platform. Now creators can broadcast directly to followers with telegram-style messaging. Dylan Harari thinks this is a game-changer for creators who want to own their audience, because it works where that community already congregates. I agree, as it leans into the private communities that seem to be supplanting big unwieldy social platforms for GenZ and GenA. Alas it’s not broadly available yet – I tried to join Zuck’s “Meta Channel” but I’m still outside looking in. Related – Instagram and Facebook will start charging for verification. Uggh.
Everyone Will Be a Creator: That’s Roblox’s vision, as laid out by CTO Daniel Sturman. They plan on using generative AI to allow every user to create items, skins, clothing, and even full-on experiences on the platform. It’s not easy, given that items will need properties and characteristics that allow them to operate in a 3D world. I love the vision of pairing users with AI tools to turn everyone into a creator. Roblox is crushing it – their quarterly results were on fire with spending on Robux up significantly. And as we talked about last week, their daily usage among kids is almost 2x TikTok – and 17-24 year olds are using it more too.
Uncovering the Chatbots of Dawn: We need a word for AI anthropomorphism. Because that’s what this article is. And that’s what Ben Thompson, author of Stratechery, has spent countless hours falling victim to. From Bing to Sydney to Venom and Fury, Thompson has been uncovering NPCs inside of Microsoft’s chat engine. We definitely need the GenAI version of Asimov’s three laws of robotics. Because even anthropomorphism can get the vulnerable into trouble. Done right, though, these chatbots of dawn could be a tremendous force for good. How quickly the backlash has swelled.
TikTok Moves to Keep Creators, Boost Traffic: As the U.S. rattles its sabers, TikTok doubles down on creators and hopes to forestall a traffic slump too. Kaya Yurieff posted a number of scoops last week, including how growth is slowing, the company is readying a new fund that promises higher payout and a new video paywall too. Creator Fund 2.0 will likely limit itself to mid-level and above creators and may add other requirements as well. And in a related development, TikTok is launching an HQ Trivia clone to help promote a new movie – and perhaps juice engagement too. My take? I’m not a fund fan and creator payout will likely disappoint. And probably won’t keep creators from defecting at scale. I’m also bearish on a $1 paywall, but perhaps longer videos will juice the ooze. I do like the trivia feature – one that Lionsgate probably paid a boatload for.
— SPONSORED: CREATOR SQUARED COMING TO NAB —
So excited to be working with Robin Raskin to bring a creator focus to this year’s NAB Show. Just as new tools and a creator-first focus are transforming broadcasting, creators are building infrastructure that borrows from traditional media but with an iterative and innovative twist. I’m creating workshops, roundtables and discussions for Creator Squared that connect the dots between the two worlds. And I’ll be emceeing live! Want to get involved? Contact Gigi@virtualeventsgroup.org or me!
New per-post revenue analysis reveals the surprising state of creator earnings. Twitch is the most lucrative.
That’s How You Do It: Sam Pollard on Making “Bill Russell: Legend”
TL;DR
When former HBO Sports President Ross Greenburg approached Sam Pollard two years ago about doing a documentary on NBA legend Bill Russell, Pollard jumped at the chance.
Bill Russell: Legend premieres on Netflix February 8 and includes the last interview with Russell, an 11-time NBA champion with the Boston Celtics.
The two-part documentary, directed by Sam Pollard (MLK/FBI, Mr. Soul!), weaves interviews with archival footage and excerpts from Russell’s memoirs to tell the basketball legend’s story.
When former HBO Sports President Ross Greenburg approached Sam Pollard two years ago about doing a documentary on NBA legend Bill Russell, Pollard jumped at the chance.
“I didn’t hesitate. I said yes, because I grew up in the ’60s,” Pollard told WNYC’s Alison Stewart. “I was very familiar with Bill Russell. I was familiar with the rivalry between Bill Russell and Wilt Chamberlain. I was excited to jump in and do this documentary.”
Bill Russell: Legend premiered on Netflix February 8 and includes the last interview with Russell, an 11-time NBA champion with the Boston Celtics. Russell died during the filmmaking process at his home in Mercer Island, Washington on July 31, 2022. He was 88.
The two-part documentary, directed by Pollard (MLK/FBI, Mr. Soul!), weaves interviews with archival footage and excerpts from Russell’s memoirs to tell the basketball legend’s story. Corey Stoll narrates, while Jeffrey Wright reads the memoir excerpts.
Pollard told Clint Krute during an episode of the Film Comment podcast that one of the biggest challenges “was to say to ourselves, ‘when do we have too much basketball? When do we need to stop and go to something that he was doing off the court?’ And then when we got to his activities off the court, the question we had to ask ourselves was, ‘how long did we stay with that material before we get back to the basketball?'”
The director added that the classic narrative structure they originally had after the tease was to follow Russell’s life chronologically. But Pollard said they decided to show Russell getting drafted in 1956 by the Celtics instead “to create the drama.”
“[Y]ou [Pollard] play with a bit of the established or traditional sports documentary time structure by reversing what we would usually think was gonna happen after the tease that we would start with the origin story narrative,” scholar Samantha Sheppard said during the Film Comment podcast with Pollard and Krute. “But you move us and shift us along and then take us back to more of a familial historical narrative in that sense. And I think that in that way, in watching this, it’s like a trick. Often it does feel quite traditional. It feels even with the time change, still quite chronological at times.”
From “Bill Russell: Legend,” Cr: Netflix
Bill Russell, member of the University of San Francisco basketball team, shows how he scores baskets on Feb. 23, 1956. The 6-foot, 10-inch center, ranked one of the best, has helped his team win 20 straight games during the current season. cr: AP Images/Courtesy of Netflix
Bill Russell in Bill Russell: Legend. cr: Netflix
Bill Russell in ‘Bill Russell: Legend’. cr: Netflix
(L) Bill Russell in ‘Bill Russell: Legend’. cr: Library of Congress/Courtesy of Netflix
Bill Russell of the Boston Celtics is shown in 1968. cr: AP Photo/Courtesy of Netflix
Boston Celtics’ coach Bill Russell is seen during a press conference in Boston, April 18, 1966. cr: AP Images/Courtesy of Netflix
President Barack Obama reaches up to present a 2010 Presidential Medal of Freedom to basketball hall of fame member, former Boston Celtics coach and captain Bill Russell, Tuesday, Feb. 15, 2011, during a ceremony in the East Room of the White House in Washington. cr: Charles Dharapak/AP Images/Courtesy of Netflix
Bill Russell of the Boston Celtics is shown in 1968. cr: AP Images/Courtesy of Netflix
Sheppard, an associate professor of cinema and media studies in the Department of Performing and Media Arts at Cornell University, authored Sporting Blackness: Race, Embodiment, and Critical Muscle Memory on Screen, which explores sports documentaries and how they represent blackness.
“It [sports documentaries] finally gives these athletes larger context. It lets them speak, it lets them be culturally and critically framed, and it lets them, it lets us as audiences see their sport not divorced from the sociality in which they live,” said Sheppard. “So it’s not a narrative of shut up and dribble, it’s actually ‘tell us more and also show us the sport at the same time.’ So these films become really, really important as a way to provide a greater context to black athletes in ways that we have not seen them on the court, and more particularly off the court in terms of their social or cultural impact.”
Russell played for the Boston Celtics from 1956 to 1969. Over his career he amassed a long list of achievements, including 11 NBA championships (two of those as a player/coach), five NBA Most Valuable Player awards, 12 NBA All-Star selections, two NCAA championships, and an Olympic gold medal.
“What’s interesting about Russell is from one perspective, he seems like this imposing, 6’9 center for the Boston Celtics. Winner, winner, winner, right? But there’s the other side to Bill Russell where he’s extremely thoughtful,” Pollard told Esquire’s Alex Belth. “He’s extremely nuanced about everything in life, not only as a basketball player but as a Black man in America. And he had opinions about everything.”
Renee Montgomery in ‘Bill Russell: Legend’. cr: Netflix
Isiah Thomas in ‘Bill Russell: Legend’. cr: Netflix
Larry Bird in ‘Bill Russell: Legend’. cr: Netflix
Walt “Clyde” Frazier in ‘Bill Russell: Legend’. cr: Netflix
Jalen Rose in ‘Bill Russell: Legend’. cr: Netflix
Bill Walton in ‘Bill Russell: Legend’. cr: Netflix
Bill Bradley in ‘Bill Russell: Legend’. cr: Netflix
Oscar Robertson in ‘Bill Russell: Legend’. cr: Netflix
Bob Cousy in ‘Bill Russell: Legend’. cr: Netflix
Tom “Satch” Sanders in ‘Bill Russell: Legend’. cr: Netflix
Off the court, Russell was deeply involved in the Civil Rights Movement, attending the 1963 March on Washington with Dr. Martin Luther King and the 1967 Cleveland Summit, as well as speaking out during Boston’s school busing crisis.
“This man was a real activist,” Pollard told NECN’s Clinton Bradford. “He didn’t just want to be known as a great basketball player, which he was, he wanted to be known as a human being who was well rounded, who had other things on his mind and other issues he wanted to articulate and talk about.”
Pollard wouldn’t have been able to tell Russell’s story without the mountain of archival footage, stills, and articles dug up by archival producer Helen Russell.
“Documentary filmmaking is really being like an anthropologist,” Pollard told Film Comment’s Krute. “You’re doing a deep dive, you’re doing a tremendous amount of research. And the more research you do, the more you find gold, you really find gold.”
But because Russell played in the 1950s and 1960s, some of that footage wasn’t the greatest.
“The one challenge that we as documentarians always face is that when you see this old footage, you say, ‘It looks pretty crappy, and there wasn’t a lot of coverage,'” Pollard told Variety’s Addie Morfoot. “So, you have to sort of take a leap of faith. [We looked at the archives] and would say – ‘Is that Bill Russell?’ But we also knew that we were never going to get the same kind of coverage and quality we see today.”
Even with the at times grainy footage, Pollard still managed to weave a narrative that makes Bill Russell: Legend stand out.
“What helps set the documentary apart is that Pollard has assembled a treasure trove of vintage game footage and vintage interviews, as well as a wealth of new or new-ish interviews with Russell, [Bob] Cousy, Satch Sanders and many of their contemporaries including the aforementioned [Jerry] West, Bill Bradley, Walt Frazier and more,” wrote The Hollywood Reporter’s Dan Fienberg. “There’s a very good balance between the game footage, which accentuates Russell’s grace and athleticism, and the interviews, which concentrate on his intensity and, perhaps more than anything, his intellect.”
Russell was a student of the game, spending hours studying.
“[H]e understood that the game of basketball is just not about being physical, it’s about being mental,” Pollard told WNYC’s Stewart. “It’s about understanding how to position yourself and play against other players, where you should be, where one of your teammates should be to get the ball to take it down the court to get a basket, to know when to get a rebound and where to get the rebound and how to use that. When he was at USF with his future teammate, K.C. Jones, they came up with the strategies. That’s what they would call themselves: rocket scientists.”
Pollard added: “They were really thinking about the physics of basketball. It just goes to show you that athletes are very intelligent people, they’re not just jocks, they’re very intelligent. Bill took it to another level in terms of understanding the science and the physics of the game and how to use the game to his advantage.”
Alex Belth, in the introduction to his interview with Pollard, summed up the documentary and Russell: “Bill Russell: Legend reminds us that in the world of team sports, the biggest team player of them all was also perhaps the most singular individualist, too.”
As the streaming wars rage on, consumers continue to be the clear winners with an abundance of series ripe for binging. See how your favorite episodics and limited series were brought to the screen with these hand-picked articles plucked from the NAB Amplify archives:
Adam McKay’s new docudrama for HBO, “Winning Time: The Rise of the Lakers Dynasty,” shows how the Lakers changed the way basketball is played.
Roundtable Discussion: How Will AI Impact Filmmakers and Other Creative Professionals?
Image created using generative AI
TL;DR
Innovations like ChatGPT and DALL·E 2 highlight the incredible advances that have taken place with AI, causing professionals in countless fields to wonder whether or not such innovations mean the end of thought leadership or if they should instead focus on the opportunities presented by such tools.
What do filmmakers and other creative professionals really think about these developments, though? What are the top concerns, questions and viewpoints surrounding the surge of AI generative technologies that have recently hit the open market? Should we be worried, or simply embrace the technology, forge ahead and let the bodies fall in the wake?
“As the saying goes – with great power comes great responsibility, and sadly, I think that may not end well for many developers who can’t control the who/where/how the end users utilize these amazing technologies,” writes ProVideo Coalition (PVC) contributor Jeff Foster.
“AI has already generated huge legal and ethical issues that I suspect will only grow larger. But the genie is out of the bottle – indeed he or she emerged at the Big Bang itself – so let’s work together to figure out how to work with this fast-emerging reality to continue to be storytellers that speak to the human condition,” writes PVC contributor Mark Spencer.
What do filmmakers and other creative professionals really think about these developments, though? What are the top concerns, questions and viewpoints surrounding the surge of AI generative technologies that have recently hit the open market? Should we be worried, or simply embrace the technology, forge ahead and let the bodies fall in the wake?
Below is how various PVC writers explored those answers in a conversation that took shape over email. You can keep the discussion going in the comments section or on Twitter.
I’m definitely not unbiased, as I’m currently engaging with as much of it on a user level as I can get my hands on (and have time to experiment with), sorting out the useful from the useless noise so I can share my findings with the ProVideo community.
But with that said, I do see some lines being crossed where there may be legitimate concerns that producers and editors will have to keep in mind as we forge ahead and not paint ourselves into a corner – either legally or ethically.
Sure, most of the tools available out there are just testing the waters – especially with the AI image and animation generators. Some are getting really good (except for too many fingers and huge breasts) but when it gets indistinguishable from reality, we may see some pushback.
So the question arises: are people generating AI images IN THE STYLE OF [noted artist] or PHOTOGRAPHED BY [noted photographer] in fact infringing on those artists’ copyrights/styles, or simply mimicking published works?
It is already being addressed in the legal system in a few lawsuits against certain AI tool developers, which will eventually shake out exactly how their tools gather the diffusion data they create from (it’s not just copy/paste). That will either settle the direct copyright infringement argument brought by artists, or it will be a nail in the coffin for many developers and forbid further access to available online libraries.
The next identifiable technology that raises potential concern, IMO, is the AI tools that regenerate facial imagery in film/video for dubbing and ratings controls, which are open to misuse and misinformation.
On that note, I’ve mentioned ElevenLabs in my last article as a highly advanced TTS (Text To Speech) generator that not only allows you to customize and modify voices and speech patterns reading scripted text with astounding realism, but also lets you sample ANY recorded voice and then generate new voice recordings from your text inputs. For example, you could potentially use any A-list celebrity to say whatever marketing blurb you want in a VO or make a politician actually tell the truth (IT COULD HAPPEN!).
But if you could combine those last two technologies together, then we have a potential for a flood of misuse.
I’ve been actively using AI for a feature documentary I’ve been working on the past few years, and it’s made a huge difference on the 1100+ archival images I’ve retouched and enhanced, so I totally see the benefits for filmmakers already. It does add a lot of value to the finished piece and I’m seeing much cleaner productions in high-end feature docs these days.
As recently demonstrated, some powerful tools and (rather complex) workflows are being developed specifically for video & film, to benefit on-screen dubbing and translations without the need for subtitles. It’s only a matter of time before these tools are ready and available for use by the general public.
As the saying goes – with great power comes great responsibility, and sadly, I think that may not end well for many developers who can’t control the who/where/how the end users utilize these amazing technologies.
I am not sure we will see a sudden shift in the production process regarding AI and documentary filmmaking. There is something about being on location with a camera in hand, finding the emotional thread, and framing up to tell a good story. It is nearly impossible to replace the person holding the camera or directing the scene. I think the ability of a director or photographer to light a scene, light multi-camera interviews, and be with a subject through times of stress is irreplaceable.
Yet, AI can easily slip into the pre-production and post-production process for documentary filmmaking. For example, I already use Rev.com for its automatic transcription of interviews and captions. Any technology that speeds up collaboration and editing will run through post-production work like wildfire. I can remember when we paid production assistants to log reality TV footage. Not only did the transcription look tedious, it was also expensive to pay for throughout the shoot. Any opportunity to save a production company money will be used.
Then we get to the type of documentary filmmaking that may require the recreation of scenes to tell the story of something that happened long before the documentary shoot. I could see documentary producers and editors turning to whatever AI tool can recreate a setting, a scene, or even an influential person’s voice. The legal implications are profound, though, and I can see a waterfall of new laws giving families intellectual property rights to a notable person’s image and voice no matter how long ago they passed, or at the very least 100 years of control over that image and voice. Whenever there is money to be made from a person’s image or voice, there will be bad actors and those who ask for forgiveness instead of permission, but I bet the legal system will eventually catch up and protect those who want it.
The rights issues are extremely knotty (I’ve recently written about this). On one hand, the extant claims that a trained AI contains “copies of images” are factually incorrect. The trained state of an AI such as Stable Diffusion, which is at the centre of recent legal action, is represented by something like the weights of interconnections in a neural network, which is not image data. In fact, it’s notoriously difficult to interpret the internal state of a trained AI. Doing that is a major research topic, and our lack of understanding is why, for instance, it’s hard to show why an AI made a certain decision.
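To make that distinction concrete, here is a minimal, purely illustrative PyTorch sketch (a toy network, not Stable Diffusion itself) showing that what a trained model persists is a set of weight tensors, not any of the images it was trained on:

```python
import torch
import torch.nn as nn

# A toy stand-in for a trained generative model.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
)

# The "trained state" is just named tensors of connection weights.
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))
# -> 0.weight (128, 64), 0.bias (128,), 2.weight (64, 128), 2.bias (64,)

# Saving a checkpoint serializes only those tensors; none of the
# training images are written into the model file.
torch.save(model.state_dict(), "checkpoint.pt")
```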
It could reasonably be said that the trained state of the AI contains something of the essence of an artist’s work, and the artist might reasonably have rights in whatever that essence is. Worse, once an AI becomes capable of convincingly duplicating the style of an artist, it probably encompasses a bit more than just the essence of that artist’s work, and our inability to be specific about what that essence really is doesn’t change the fact that the artist really should have rights in it. What makes this really hard is that most jurisdictions do not allow people to copyright a style of artwork, so if a human artist learns how to duplicate someone else’s style, so long as they’re upfront about what they’re doing, that’s fine. What rubs people the wrong way is doing it with a machine which can easily learn to duplicate anyone’s work, or everyone’s work, and which can then flood the market with images in that style which might realistically begin to affect the original artist’s work.
In a wider sense this interacts with the broad issues of employment in general falling off in the face of AI, which is a society-level issue that needs to be addressed. Less skilled work might go first, although perhaps not – the AI can cut a show, but it can’t repair the burst water main without more robotics than we currently have. One big issue coming up, which probably doesn’t even need AI, is self-driving vehicles. Driving is a massive employer. No plans have been made for the mass unemployment that’s going to cause. Reasonable responses might include universal basic income but that’s going to require some quite big thinking economically, and the idea that only certain, hard-to-automate professions have to get up and go to work in the morning is not likely to lead to a contented society.
This is just one of a lot of issues workers might have with AI and so the recent legal action might be seen as an early skirmish in what could be a quite significant war. I think Brian’s right about this not creating sudden shifts in most areas of production. To some extent the film and TV industry already does a lot of things it doesn’t really need to do, such as shooting things on 65mm negative. People do these things because it tickles them. It’s art. That’s not to say there might not likely be pressures to use more efficient techniques when they are available, as has been the case with photochemical film, and that will create another tension (as if there aren’t already a lot) between “show” and “business”. As a species we tend to be blindsided by this sort of thing more than we really should be. We tend to assume things won’t change. Things change.
I do think that certain types of AI information might end up being used to guide decision-making. For instance, it’s quite plausible to imagine NLE software gaining analysis tools which might create the same sort of results that test screenings would. Whether that’s good or not depends how we use this stuff. Smart application of it might be great. Allowing it to become a slave driver might be a disaster, and I think we can all imagine that latter circumstance arising as producers get nervous.
While AI has a lot to offer, and will cause a great deal of change in our field and across society, I don’t think it’ll cause broad, sweeping changes just yet. Artificial Intelligence has been expected to be the next big thing for decades now, and (finally!) some recent breakthroughs are starting to have a more obvious impact. Yet, though ChatGPT, Stable Diffusion, DALL·E and Midjourney can be very impressive, they can also fail badly.
ChatGPT seems really smart, but if you ask it about a specialist subject that you know well, it’s likely to come up short. What’s worse than ChatGPT not knowing the answer? Failing to admit it, but instead guessing wrong while sounding confident. Just for fun, I asked it “Who wrote Final Cut Pro Efficient Editing” because that’s the modern equivalent of Googling yourself, right? It’s now told me that both Jeff Greenberg and Michael Wohl wrote the book I wrote in 2020, and I’m not as impressed as I once was.
Don’t get me wrong: if you’re looking for a surface level answer, or something that’s been heavily discussed online, you can get lucky. It can certainly write the script for a very short, cheesy film. (Here’s one it wrote: https://vimeo.com/795582404/b948634f34.) Lazy students are going to love it, but it remains to be seen if it’s really going to change the way we write. My suspicion is that it’ll be used for a lot of low-value content, as AI-based generators like Jasper are already used today, but the higher-value jobs will still go to humans. And that’s a general theme.
Yes, there will be post-production jobs (rotoscoping, transcription) done by humans today which will be heavily AI-assisted tomorrow. Tools like Keyper can mask humans in realtime, WhisperAI does a spectacular job of transcription on your own computer, and there are a host of AI-based tools like Runway which can do amazing tricks. These tasks are mostly technical, though, and decent AI art is something novel. Image generators can create impressive results, albeit with many failures, too many fingers, and lingering ethical and copyright issues. But I don’t think any of these tools are going away now. Technology always disrupts, but we adapt and find a new normal. Some succeed, some fail.
A saving grace is that it’s easy to get an AI model about 95% of the way there, but the last 5% gets a bit harder, and the final 1% is nearly impossible. Now sometimes that 5% doesn’t matter — a voice recording that’s 95% better is still way better, and a transcription that’s nearly right is easy to clean up. But a roto job where someone’s ears keep flicking in and out of existence is not a roto job the client will accept, and it’s not necessarily something that can be easily amended.
So, if AI is imperfect, it won’t totally replace humans at all the jobs we’re doing today. Many will be displaced, but we’ll get new jobs too. AI will certainly make it into consumer products, where people don’t care if a result is perfect, but to be part of a professional workflow, it’s got to be reliable and editable. There are parallels in other creative fields, too: after all, graphic designers still have a livelihood despite the web-based templated design tool Canva. Yes, Canva took away a lot of boring small jobs, but it doesn’t scale to an annual report or follow brand guidelines. The same amount of good work is being done by the same number of professionals, and there’s a lot more party invitations that look a little better.
For video, there will be a lot more AI-based phone apps that will perform amazing gimmicks. More and better TikTok filters too. There will also be better professional tools that will make our jobs easier and some things a lot quicker — and some, like the voice generation and cleanup tools, will find fans across the creative world. Still, we are a long, long way from clients just asking Siri 2.0 to make their videos for them.
Beyond video, the imperfection of AI is going to heavily delay any society-wide move to self-driving cars. The world is too unpredictable, my Tesla still likes to brake for parked cars on bends, and to move beyond “driver assistance,” self-driving tech has to be perfect. A capability to deal with 99.9999% of situations is not enough if that remaining 0.0001% kills someone. There have been some self-driving successes where the environment is more carefully mapped and controlled, but a general solution is still a way off. That said, I wouldn’t be surprised to see self-driving trucks limited to predictable highway runs arrive soon. And yes, that will put some people out of work.
So what to do? Stay agile, be ready for change. There’s nothing more certain than change. And always remember, as William Gibson said: “The future is already here, it’s just not very evenly distributed.”
AI audio tools keep growing. Some that come to mind are Accusonus ERA (currently being bought), Adobe Speech Enhancement, AI Mastering, AudioDenoise, Audo.ai, Auphonic, Descript, Dolby.io, Izotope RX, Krisp, Murf AI Studio, Veed.io and AudioAlter. Of those, I have personally tested Accusonus ERA, Adobe Speech Enhancement, Auphonic, Descript and Izotope RX6.
I have published articles or reviews about a few of those in ProVideo Coalition.
There’s a lot of use of AI and “smart” tools in the audio space. I often think a lot of it is really just snake oil – using “AI” as a marketing term. But in any case, there are some cool products that get you to a solid starting point quickly.
Unfortunately, Accusonus is gone and has seemingly been bought by Meta/Facebook. If not directly bought, then they’ve gone into internal development for Facebook and are no longer making retail plug-ins.
In terms of advanced audio tools, Sonible is making some of the best new plug-ins. Another tool to look at is Adobe’s Podcast application, which is going into public beta. Their voice enhancement feature is available to be used now through the website. Processing is handled in the cloud without any user control. You have to take or leave the results, without any ability to edit them or set preferences.
AI and Machine Learning tools offer some interesting possibilities, but they all suffer from two biases. The first is the bias of the developers and of the libraries used to train the software. In some cases that will be personal biases and in others it will be the biases of the available resources. Plenty has been written about the accuracy of dog images versus cat images created by AI tools, or about facial recognition’s flaws with darker skin, including tattoos.
The second large bias is one of recency – mainly the internet. Far more data, both general and specific, is available from the last 10-20 years via internet resources than from before. If you want to find niche information predating the internet, let’s say before 1985, it can be a very difficult search. That won’t be something AI will likely access. For example, if you tried to have AI mimic the exact way that Cinedco’s Ediflex software and UI worked, I doubt it would happen, because the available internet data is sparse and it’s so niche.
I think the current state of the software is getting close enough to fool many people and could probably pass the famous Turing test criteria. However, it’s still derivative. AI can take A+B and create C or maybe D and E. What it can’t do today (and maybe never), is take A+B and create K in the style of P and Q with a touch of Z. At least not without some clear guidance to do so. This is the realm of artists to be able to make completely unexpected jumps in the thought process. So maybe we will always be stuck in that 95% realm and the last 1-5% will always be another 5 years out.
Another major flaw in AI and Machine Learning – in spite of the name – is that it does not “learn” based on user training. For instance, Pixelmator Pro uses image recognition to name layers. If I drag in a photo of the Eiffel Tower it will label it generically as tower or building. If I then correct that layer name by changing it to Eiffel Tower, the software does nothing to “learn” from my correction. The next time I drag in the same image, it still gets a generic name, based on shape recognition. So there’s no iterative process of “training” the library files that the software is based on.
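A hypothetical sketch of that inference-only pattern, using a generic pretrained torchvision classifier rather than Pixelmator’s proprietary model (the image file name is made up): the app runs a fresh forward pass every time, and a user’s corrected label never touches the weights.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# Load a fixed, pretrained classifier; consumer apps ship something similar.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

# Every import of the photo repeats the same forward pass.
img = preprocess(Image.open("eiffel_tower.jpg")).unsqueeze(0)  # hypothetical file
with torch.no_grad():
    label = categories[model(img).softmax(dim=1).argmax().item()]
print(label)  # a generic class such as "obelisk" or "castle"

# A user correction ("Eiffel Tower") is just a string edit in the UI.
# Without an explicit fine-tuning pass that updates model parameters,
# the next import of the same image yields the same generic label.
```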
I do think that AI will be a good assistant in many cases, but it won’t be perfect. Rotoscoping will still require human finesse (at least for a while). When I do interviews for articles, I record them via Skype or Zoom and then use speech-to-text to create a transcript. From that I will write the article, cleaning up the conversation as needed. Since the software is trying to create a faithful transcription of what the speaker said, I often find that the clean-up effort takes more time and care than if I’d simply listened to the audio and transcribed it myself, editing as I went along. So AI is not always a time-saver.
There are certainly legal questions. At what point is an AI-generated image an outright forgery? How will college professors know whether the student’s paper is original versus something created through ChatGPT? I heard yesterday that actual handwriting is being pushed in some schools again, precisely because of such concerns (along with the general need to have legible writing). Certainly challenging ethical times ahead.
I think that in the world of film we have a bit of breathing room when it comes to advances in AI bringing significant changes, and perhaps a bit of an early warning of what might be to come. Our AI tools are largely technical rather than creative, and the creative ones are less well developed than the image and text creation tools, so they don’t yet pose much of a challenge to our livelihoods, and the legal issues aren’t as complicated. For example, AI noise reduction or upscaling are effectively fixing our mistakes, and there isn’t much need for the models to be trained on data they might not have legal access to (though I imagine behind the scenes this is an important topic for them, as getting access to high quality training data would improve their product).
I see friends who are writers or artists battling to deal with the sudden changes in the AI landscape. I know copywriters whose clients are asking them if they can’t just use ChatGPT now to save money, and others whose original writing has been falsely flagged as AI-generated by an AI analysis tool; while I’m sure the irony is not lost on them, it doesn’t lessen their stress. So in terms of livelihoods and employment I think there are real ethical issues, though I have no idea how they can be solved, aside from trusting that creative people will always adapt, though that takes time and the suddenness of all this has been hard for many.
On the legal side, I feel like there is a massive amount of catching up to do and it will be fascinating to see how these current cases work out. It feels like we need a whole new set of legal precedents to deal with emerging AI tools, aside from just what training data the models can access. Looking at the example of deepfakes, I love what a talented comedian and voice impersonator like Charlie Hopkinson can do with it – I love watching Gandalf or Obi-Wan roasting their own shows – but every time I watch, I wonder what Sir Ian McKellen would think – though somehow I think he would take it quite well. Charlie does put a brief disclaimer on the videos, but that doesn’t feel enough to me. I would have thought the bare minimum would be a permanent disclaimer watermark, let alone a signed permission from the owner of that face! I think YouTube has put some work into this, focusing more on the political or the even less savoury uses, which of course are more important, but more needs to be done.
I think we in the worlds of production and post would be wise to keep an eye on all the changes happening so we can stay ahead and make them work to our advantage.
I have been experiencing a sense of excitement and wonderment over the most recent developments in AI.
It’s accelerating. And at the same time, I’m cynical – I’ve read/watched exciting research (sometimes from SIGGRAPH, sometimes from some smaller projects) that never seems to see the light of day.
About six years ago, I did some consulting work around machine learning and have felt like a child in a candy store, discovering something new and fascinating around every corner.
Am I worried about AI from a professional standpoint?
Nope. Not until they can handle clients.
If the chatbots I encounter are any indicators? It’s going to be a while.
For post-production? It’s frustrating when the tools don’t work. Because there’s no workaround that will fix it when it fails.
ChatGPT is an excellent example of this. It’s correct (passing the bar, passing the MCAT), until it’s confidentially incorrect. It gave me answers that just don’t exist/aren’t possible. How is someone to evaluate this?
If you use ChatGPT as your lawyer, and it’s wrong, where does the liability live?
That’s the key in many aspects – it needs guidance, a professional who knows what they’re doing.
In creating something from nothing, there are a couple of areas in the crosshairs:
Text2image. That works sorta well. Video is a little harder.
Music generation. I totally expect this to be a legal nightmare. When the AI generates something close to an existing set of chords, who (if anyone) gets a payment? If you use it in your video, who owns the rights to that synthetic music?
Speech generation. We’ve been cloning voices decently (see Descript’s Lyrebird and the newer ElevenLabs voice synthesis). ElevenLabs has at least priced it heavily – but suddenly, audiobook generation with different voices for different characters will make it more difficult to make a living as a voice artist.
Deepfakes. It’s still a long way from easy face replacement.
These tools excite me most in the functional areas instead of the “from scratch” perspective.
Taking painful things and reducing the difficulty.
That’s what good tools should always do, especially when they leave the artist the ability to influence the guidance.
OpenAI’s Whisper really beats the pants off other speech-to-text tools. I’m dying to just edit the text. Descript does this, which is close to what I want.
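For anyone who hasn’t tried it, a minimal sketch using the open-source whisper package (the audio file name here is hypothetical); the timestamped segments it returns are exactly the raw material a text-based editing workflow builds on:

```python
# pip install openai-whisper  (also requires ffmpeg on the system)
import whisper

model = whisper.load_model("base")           # or "small" / "medium" / "large"
result = model.transcribe("interview.wav")   # hypothetical input file

print(result["text"])                        # the full transcript
for seg in result["segments"]:               # timestamped chunks, handy for editing
    print(f'{seg["start"]:7.2f}s  {seg["text"].strip()}')
```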
Colourlab.ai‘s matching models – 100% what I’m talking about. Different matching models, a quick pick, and you’re on your way. (Disclaimer: I do some work for Colourlab.)
Adobe’s Remix is a great example of this. It’s totally workable for nearly anyone and is like magic. It takes this painful act of splicing music to itself (shorter or longer) and makes it easy.
The brightest future.
You film an interview. You read the text, clean it up, and tell a great story.
Except there’s an issue – something is unclear in the statements made. You get clearance from the interviewee about the rephrasing of a statement. Then use an AI voice model of their voice to form the words. And another to re-animate the lips to look like the subject said it.
This is “almost here.”
The dark version of this?
It’s society-level scary (but so are auto-driving cars that can’t really recognize children, which one automaker is struggling with).
Here’s a scary version: You get a phone call, and you think it’s a parent or significant other. It’s not. It’s a cloned voice and something like ChatGPT trained on content that can actually respond in near-real time. I’ll leave the “creepy” factor up to you here.
Ethical ramifications
Jeff Foster brings up this question – what happens when we can convincingly make people say what we want?
At some level, we’ve had that power for over a decade. Just the fact that we could take a word out of someone’s interview gives us that power. It’ll just make that easier/more accessible. As well as “I didn’t say that; it was AI” being a defense.
It’s going to be ugly because our lawmakers, and our judicial system, can’t educate themselves quickly enough if the past is any indication.
Generative AI isn’t “one-click”
As Iain pointed out in the script he had ChatGPT write, it did the job, it found the format, but it wasn’t very good.
I wonder how it might help me get around writer’s block.
Generative text is pretty scary – and may disrupt Google.
Since Google’s ranking is based on inbound/outbound links, blog spam is going to explode even more very soon, and it’ll be harder to tell what content is well written and what is not.
Unless it comes from a specific person you trust.
And as Oliver pointed out, it’s problematic until I can train it with my data – it needs an artist.
The inability to re-train it means that failures will consistently fail. Then we’re in workaround hell.
Personally I believe that AI technologies are going to cause absolutely massive disruption not just to the production and post-production industries, but across the entire gamut of human activity in ways we can’t even imagine.
In the broadest sense, the course of evolution has been one of increasing complexity, often with exponential jumps (e.g., Big Bang, Cambrian explosion, Industrial Revolution). AI is a vehicle for another exponential leap. It is extraordinarily exciting and terrifying, fraught with danger, yet it will also create huge new opportunities.
How do we position ourselves to benefit from, or at least survive, this next revolution?
I’d suggest moving away from any task or process that AI is likely to take over in the short term. Our focus should be on what humans (currently) do better than AI. Billy Oppenheimer, in his article on The Coffee Cup Theory of AI, calls this Taste and Discernment. Your ability to connect to other humans through your storytelling, to tell the difference between the great and the good, to choose the line of dialog, the lighting, the composition, the character, the blocking, the take, the edit, the sound design…and use AI along the way to create all the scenarios from which you use your developed sense of taste to discern what will connect with an audience.
AI has already generated huge legal and ethical issues that I suspect will only grow larger. But the genie is out of the bottle – indeed he or she emerged at the Big Bang itself – so let’s work together to figure out how to work with this fast-emerging reality to continue to be storytellers that speak to the human condition.
(These words written by me with no AI assistance :-))
AI-generated art has advanced by leaps and bounds, but studios and audiences alike might not yet be ready for its Hollywood closeup.
What Does Susan Wojcicki’s Exit Mean For YouTube?
BY JIM LOUDERBACK
TL;DR
With Susan Wojcicki stepping back at YouTube, it’s certainly the end of an era. But I’d rather look forward than back.
YouTube’s new leader Neal Mohan has led product at YouTube since 2015, but he arrived at Google with the DoubleClick acquisition in 2008.
The new YouTube chief has an impressive background in strategy, operations and product – all essential to chart the future path of YouTube.
With Susan Wojcicki stepping back at YouTube, it’s certainly the end of an era. But I’d rather look forward than back. For insight into what it means for YouTube, study her replacement, Neal Mohan.
First off, I am a Mohan fan. He’s been a regular speaker at VidCon and I’ve always been appreciative of his expertise and insight into YouTube’s future product direction.
Mohan has led product at YouTube since 2015, but he arrived at Google with the DoubleClick acquisition in 2008. While at DoubleClick he grew and ran the business, ultimately running strategy and leading the sale to Google.
Susan Wojcicki cr: Google
At Google he built the display and video ad business and grew it to the industry leader it is today. Interestingly Mohan also worked as a strategy intern at Microsoft while getting his Stanford MBA.
The new YouTube chief has an impressive background in strategy, operations and product – all essential to chart the future path of YouTube. His strong strategy expertise should lead to a heightened focus on the opportunities for YouTube to continue to become the global uber-video app, while also leaning into more ways for YouTube to grow revenue for creators and ultimately for Alphabet.
The tide has shifted at YouTube, with first Robert Kyncl and now Wojcicki out as senior leaders. Tellingly, it seems Mohan won’t take on the “CEO” role but will remain as senior VP while becoming head of YouTube.
I’ve frequently referenced interviews and articles about and by Mohan in my “Inside the Creator Economy” newsletter. For the last few years, each February, Mohan has penned an annual look at upcoming products, tools and features on the Inside YouTube blog.
The cultural impact a creator has is already surpassing that of traditional media, but there’s still a stark imbalance of power between proprietary platforms and the creators who use them. Discover what it takes to stay ahead of the game with these fresh insights hand-picked from the NAB Amplify archives:
South Korea’s Synthetic Pop Stars: What Do “They” Mean for the Metaverse?
Virtual idol group MAVE
TL;DR
South Korea is the world’s testing ground for tech, so when K-pop singers compete in a virtual universe what does this tell us about the future of entertainment?
The popularity of synthetic pop stars in South Korea may be peculiar to that culture. Or is it?
Could the merger of virtual with the real create a new genre of content?
With its highly digital, device-literate, young and ultra-competitive society, South Korea is looked on as the world’s petri dish for future media. The current vogue for K-pop stars who use avatars, and the popularity of entirely virtual singers and influencers, means the country is one to watch.
South Korean tech company Kakao Entertainment, for example, is billing Mave, its artificial band, as the first K-pop group created entirely within the metaverse. It is using machine learning, deepfake, face-swap and 3D production technology.
To give them global appeal, the company wants the “girls” of Mave to eventually be able to converse in, say, Portuguese with a Brazilian fan and Mandarin with someone in Taiwan, fluently and convincingly.
The idea, the project’s technical director, Kang Sung-ku, tells Jin Yu Young and Matt Stevens at The New York Times, is that once such virtual beings can simulate meaningful conversations, “no real human will ever be lonely.”
Kakao also runs the virtual world called “Weverse” or simply “W.” In part of this world there’s a game show called Girl’s Re:verse that features 30 singers, eliminated over time, until the last five standing form a band. All are members of established K-pop bands or solo artists. But they are all masked as animated avatars.
Strictly speaking, this is not a metaverse, says the NYT. These are instead proprietary platforms that users have to log in to, accepting terms of service, with no sign of any cryptographic features.
But the complete blurring of the virtual with the real is surely one core trait of what will become the metaverse.
Cr: “Avatar Singer”
Another example is a TV reality show, not dissimilar to The Masked Singer, but with a difference. Avatar Singer is a 15-episode music competition that ran live on Korean TV channel MBN. It features celebrity competitors masked as digital 3D avatars complete with superpowers.
As explained by one of the vendors behind the project, the show used live motion capture, facial capture, Unreal Engine and augmented reality. These enabled the team to “expand the conventional stage into an evolved universe.”
Compared with their Korean counterparts, media companies in the United States have only engaged in “light experimentation” with the metaverse so far, Andrew Wallenstein, president and chief media analyst of Variety Intelligence Platform, tells The New York Times.
Countries like South Korea “are often looked at like a test bed for how the future is going to pan out,” Wallenstein said. “If any trend is going to move from overseas to the US, I would put South Korea at the front of the line in terms of who is likeliest to be that springboard.”
Already Korean “virtual influencers” like Rozy have Instagram followings in six figures and promote real brands like Chevrolet and Gucci.
“We want to create a new genre of content,” said Baik Seung-yup, Rozy’s creator, who estimates that about 70% of the world’s virtual influencers are Korean.
“From a Western perspective, it can seem strange,” Enrique Dans writes in a blog post on Medium. “The [virtual pop] groups all look pretty similar (the manga-style avatars have huge eyes and heart-shaped faces), and are deeply rooted in the cultural codes of the country’s youth.”
He adds, “Young Koreans follow their favorite bands, attend concerts, and celebrate their bands’ rise to popularity as a reflection of their competitive society, where they must gain access to certain schools and universities if they want to find a good job.”
Son Su-jung, a producer for the show, also says that part of the point was to give K-pop singers — “idols,” as they are called — a break from the industry’s relentless beauty standards, letting them be judged by their talent, not their looks.
“Idols in the real world are expected to be a product of perfection, but we hope that through this show, they can let go of those pressures,” she said.
The metaverse may be a wild frontier, but here at NAB Amplify we’ve got you covered! Hand-selected from our archives, here are some of the essential insights you’ll need to expand your knowledge base and confidently explore the new horizons ahead:
Synthetic media, sometimes referred to as “deepfake” technology, is already impacting the creative process for artists (and non-artists).
February 15, 2023
The ‘70s-Inspired Visuals of Benjamin Caron’s “Sharper”
TL;DR
For his debut feature “Sharper,” director Benjamin Caron wanted cinematographer Charlotte Bruus Christensen to be the “Princess of Darkness” in homage to cinematographer Gordon Willis.
Willis famously shot “The Godfather,” “Klute” and other movies in next to no light; in the case of “The Godfather” that creative choice was driven by Marlon Brando’s makeup.
“Sharper” is a grifter movie that revels in the use of shadows and underexposed long takes.
Prior to “Sharper,” Caron had notable success directing episodes of “The Crown” and Disney’s Star Wars episodic “Andor.”
Not knowing what will happen is the ultimate tease for a grifter movie like Sharper — the darkness just adds to the mystery.
The British director of Sharper, now streaming on Apple TV+, wanted his DP Charlotte Bruus Christensen to become the “Princess of Darkness” in homage to cinematographer Gordon Willis, who famously shot The Godfather and other movies in next to no light.
Rather obviously, Vanity Fair’s Richard Lawson takes a romantic view of using film, unkindly describing the digital alternative’s look as “the plastic dullness of a toss-off digital Netflix thriller.” With Bruus Christensen’s film aesthetic, however, he warmly welcomed “the grain and light of what movies used to look like.”
Julianne Moore, Sebastian Stan, John Lithgow, Justice Smith and Briana Middleton in “Sharper.” Cr: Apple TV+
In truth, Willis’ approach to lighting — particularly in the initial scene of The Godfather — occurred to him only at the last minute as a means to counter the strange makeup Marlon Brando was using. Just 20 minutes prior to the shoot, the only technique he could think of was to use a top light. Ultimately, this decision sealed the look of the movie from that point on. But maybe the die was already cast with his moody aesthetic for Klute, which he shot the year before, in 1971.
But Lawson’s coupling of the use of film with an old-fashioned con artist tale is understandable, clumsy as it might be, as Sharper is a thriller that revels in the use of shadows and underexposed long takes.
The director, Benjamin Caron, was new to feature films but had notable success in directing episodes of The Crown and Disney’s brilliant Star Wars episodic Andor. But for Sharper, he had asked Bruus Christensen “to think about these sophisticated compositions of using light and darkness,” as he told SlashFilm’s Ben Pearson. “But probably one of the biggest reference points for me was Klute. There was just something about the atmosphere of that film that I’ve always loved.”
Describing Willis’ work, Caron says, “He just basically infused every frame with meaning and atmosphere, and there was a beautiful delicacy to it. So it was a heavy leaning into the feeling of that film.” (As an aside, this 1971 film has been having a hell of a cultural resurgence as of late, BJ Colanelo notes at SlashFilm, with director Matt Reeves also citing the film as a massive influence on The Batman.)
Caron also referenced The Color of Money, Drive and especially Fincher’s Seven. “What I loved about that film is that you were so claustrophobic for such a long period of time. You were held in that city. It was all mainly shot at night and it was rain, but then right at the very end of the film, you suddenly had this big desert expanse where there was nothing else.”
He could see that same scenario working for Sharper, he told Pearson. “We had all these characters penned into Manhattan, where the sight lines are limited and you can rarely see the horizon. But then, as in Seven, I love the end where suddenly you’re in this open space where you can see nothing but sky, and ultimately the characters have nowhere to hide.”
Julianne Moore in “Sharper.” Cr: Apple TV+
Apple’s own description of Sharper does harken back to thrillers of the past: “No one is who they seem. A neo-noir thriller of secrets and lies, set amongst New York City’s bedrooms, barrooms and boardrooms. Characters compete for riches and power in a high stakes game of ambition, greed, lust and jealousy that will keep audiences guessing until the final moment.”
Pete Hammond’s review of the film for Deadline describes the pull of this new swindler story. “Seeing the nifty grifter drama Sharper reminded me how rarely we encounter this kind of clever cat-and-mouse game that might fall into the noirish genre but really relies on diving into a world filled with characters who reveal slices of their lives that keep changing moment to moment,” he writes.
“It is the kind of movie I find enormously difficult to review because its ultimate success for a viewer is just watching it unfold, beat by beat, never quite knowing exactly where it is heading but still glued to the screen to find out,” Hammond continues.
“Written in a non-linear style and separated by chapters identified on the screen with character’s names, the focus keeps changing as we see events unfold, and eventually intertwine, as the story takes twists and turns and then twists right back again.”
Sebastian Stan, John Lithgow and Briana Middleton in “Sharper.” Cr: Apple TV+
But it is director Caron, in his first feature, who kept the lid on what the characters were thinking, not wanting to clue the audience into the deceit. “Deception is definitely the defining feature of this film, and I’m always interested in characters’ motivations and how people talk or flirt or lie or impersonate in terms of getting what they want,” he told Pearson.
“I thought it was really important in this film that we never had a nod and a wink to the audience at any moment that something was about to happen. Sometimes I think there’s a tendency, whether it be from the storyteller or even from the performer, to show too much.
“And I think right from the very beginning, even in conversations with the actors, we wanted to hold all of that back. Because I really remember reading the script and I really remember those moments where I was floored and I was genuinely shocked and surprised. So it was really important they held onto that integrity.”
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Cinematographer Felix Wiedemann uses the ARRI Alexa LF to create a naturalistic look for Netflix’s hit psychological thriller series.
February 7, 2023
“Kendrick Lamar Live in Paris” Brings Cinematic Production to a Streamed Event
TL;DR
The video production of the recent Kendrick Lamar concert in Paris employed multiple digital cinema cameras in a livestreamed outside broadcast.
The production relied heavily on Sony equipment, including the company’s digital cine flagship Venice camera in both Super 35 and full-frame 6K configurations.
Other equipment included an ARRI Trinity rig spanning the area from the stage to the floor, a spidercam, and a robotic rail-cam system “that acted like a sniper,” able to boom up and boom down precisely while maintaining a beautiful frame above stage height.
Camera technology that started out in the upper echelons of cinema has now become so accessible that digital cine cameras and lenses are being used to photograph sports and music concerts too.
Normally, such cameras are used sparingly, for cinematic depth-of-field cutaways in live sports or in glossily post-produced concert videos.
The video production of the recent Kendrick Lamar tour took this to another level by using multiple digital cinema cameras in a livestreamed outside broadcast.
Perhaps that isn’t surprising given an artist of Lamar’s caliber. The Big Steppers: Live From Paris, part of Lamar’s “Big Steppers Tour,” was streamed live exclusively on Amazon Music and Prime Video from the Accor Arena in Paris this past October.
Kendrick Lamar’s The Big Steppers Tour LIVE from Paris.
“We didn’t want to just use a prefab camera plot,” Ritchie explains in Sony’s case study of the shoot. “We really wanted to understand what would be dynamic, what would be a great storytelling device, what lenses would feel more immersive versus objective.”
The amount of technology used for the shoot was astonishing, as the case study details. An ARRI Trinity went from the stage to the floor for specifically choreographed moments. Two additional Steadicams, one on stage for fluid live moments and one in the audience, captured moments with fans. A robotic rail-cam system “acted like a sniper,” able to boom up and boom down precisely while maintaining a beautiful frame above stage height.
Kendrick Lamar’s “The Big Steppers: Live from Paris” livestream concert event. Cr: Amazon Music
They also had a spidercam for very specific cinematic moments, plus a 25-foot tower camera and a Technocrane that glided slowly over the audience, capturing waves of hands as it made its way to the stage.
Principal photography was handled by 16 Sony Venice cameras and Sony’s new cinematic pan-tilt-zoom camera, the FR7.
Ritchie used the Venice at 6K in full-frame, along with lenses like Signature Zooms, Fujinon Premistas and primes.
“The beauty of full-frame is you can see a nice wide shot of a stadium or an arena, but stay focused on the person right there in front of you,” he said. “To be able to control someone’s attention with more shallow depth of field in certain moments is critical to the narrative. I can show you 80,000 people and a massive stage, and by using a shallow depth of field I can ensure the audience stays laser focused on the artist while still offering an epic sense of depth and grandeur.”
He also used Venice in Super 35 mode, allowing him to employ longer cinema zooms and converted broadcast lenses that can offer both tight and wide coverage from all angles.
“One of the biggest challenges in live spaces is distance to the subject,” says Ritchie. “Feature films happen between eight and 20 feet. However, it’s often challenging to maintain the inner ring of close coverage in a live space, especially when you have massive stages and catwalks in excess of 120 feet, while trying not to impede on the audience experience. Having that second ring of coverage is crucial to maintain coverage throughout the film.”
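For the technically minded, the sensor-size tradeoff Ritchie describes falls out of the standard depth-of-field approximation. As a rough sketch, assuming the subject distance d is much larger than the focal length f, and the usual ~1.5× crop factor between Super 35 and full-frame:

\[ \mathrm{DoF} \approx \frac{2\,N\,c\,d^{2}}{f^{2}} \]

where N is the f-number and c is the acceptable circle of confusion. Matching the same framing on full-frame takes a focal length about 1.5× longer than on Super 35, while c grows only linearly with the sensor, so depth of field shrinks by roughly a third at the same aperture. And the d² term is exactly why the long working distances of an arena make shallow focus so hard to hold.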
Live-grade LUTs were applied, adjusting exposure and black levels and accounting for any variances between lenses and the environment, which, as you can imagine, means battling constantly changing extreme contrasts, bright LED screens and highly saturated lighting.
“We’re doing that with 16 to 20 cameras in the live space where every one of these needs to be as close to perfect as possible,” adds Ritchie.
“When you’re shooting for a film, you have the luxury of time and an edit. You can just shoot Log and tweak the exposure and color later. But in the live space it’s real-time. In-line LUT boxes apply our base look and our truck RCPs control iris as well as subtle variances between cameras. The cinematographer, DITs, LD and video engineers are all working in perfect sync, safeguarding the image through every crucial step.”
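For readers outside the color pipeline, here is a minimal sketch of the core operation a LUT box performs: mapping every incoming RGB value through a pre-baked 3D lookup table. This is illustrative Python, not the production’s actual system; the file name is hypothetical, and the nearest-neighbor lookup is a simplification of the trilinear interpolation real-time hardware performs on every frame.

```python
import numpy as np

def load_cube_lut(path):
    """Parse a .cube file into an (N, N, N, 3) float array plus its size N."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue  # skip blanks and comments
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif len(parts) == 3:
                try:
                    rows.append([float(p) for p in parts])
                except ValueError:
                    pass  # a keyword line, not a table entry
    return np.asarray(rows, dtype=np.float32).reshape(size, size, size, 3), size

def apply_lut(frame, table, size):
    """Map a float RGB frame in [0, 1], shape (H, W, 3), through the LUT."""
    idx = np.clip(np.rint(frame * (size - 1)).astype(int), 0, size - 1)
    # .cube tables vary fastest on red, so the array axes run (B, G, R)
    return table[idx[..., 2], idx[..., 1], idx[..., 0]]

# Hypothetical usage: bake the show look in before the frame hits the switcher.
# table, n = load_cube_lut("show_look.cube")
# graded = apply_lut(camera_frame, table, n)
```

The point of doing this upstream, in hardware, is latency and consistency: the base look lands on every camera identically and instantly, while the RCP trims Ritchie mentions handle the per-camera differences a shared LUT can’t.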
Donald and Stephen Glover bring season 3 of “Atlanta” to FX and Hulu in what critics call “a true American masterpiece.”
February 7, 2023
Creating the Lo-Fi, VHS-Vibe Visuals for “Skinamarink”
TL;DR
In a world of crystal-clear 4K smartphone videography, the detuned aesthetic of indie horror feature “Skinamarink” is even more distinct.
Working under a no-budget budget of just $15,000, writer-director-editor Kyle Edward Ball found that micro-budget limitations fueled his creative vision.
Ball used his short film “Heck” to develop a technique the indie filmmaker calls “filming by implication.”
This technique demanded a set of steadfast rules: “We never see someone’s face. We avoid showing people on screen for too long. Whatever dialogue is delivered is always delivered off-screen. We never go outside. We never leave the house.”
The trailer for Skinamarink shows just how much work went into making the indie horror film look so bad. In this world of crystal-clear 4K smartphone videography, a detuned aesthetic is even more distinct, and perhaps welcome. Writer-director-editor Kyle Edward Ball drains any sense of clarity from his movie, visually and psychologically. That could be put down to the no-budget budget (which reached a final tally of $15,000), but in fact it proclaims the skills of the indie filmmaker and his small crew.
Skinamarink has been acquired by horror streamer Shudder and is currently in theaters via IFC Midnight. It will debut on Shudder later in 2023.
Filmmaker Magazine’s Natalia Keogan describes the incredibly loose narrative. “It follows young siblings Kevin (Lucas Paul) and Kaylee (Dali Rose Tetreault) as they patter around their family’s strikingly ordinary middle-class house in the dead of night circa 1995,” she writes.
“Their parents are nowhere to be found, all of the doors have mysteriously vanished and the lights eventually stop working. While these phenomena are enough to chill any child, their well-being is most threatened by a supernatural presence that beckons the siblings to obey increasingly disturbing requests.”
Keogan’s description continues in nightmarish terms: “Skinamarink does not rely on typical genre conventions, barely even showing the protagonists in full, opting for shots of disjointed limbs and obscured faces. The film’s bone-numbing terror comes from somewhere deeper and more genuine than a cheap jump-scare, like an early childhood nightmare extracted from our collective subconscious, transferred to a VHS tape and screened on an old CRT television set at three a.m.”
In his interview with Ball for RogerEbert.com, Isaac Feldberg was keen for the filmmaker to unveil his production techniques: “Ball found that micro-budget limitations fueled his creative vision, necessitating all manner of trick photography and unconventional angles to mimic a child’s-eye view.”
His short film Heck was a proof-of-concept exercise for what was to come. “Through doing my YouTube series, I developed a technique of filming by implication, instead of showing. So, instead of showing actors, I was doing point-of-view shots or filming different parts of the room while we had audio off-screen. And, after a while, I thought, ‘Maybe I could do a feature like this…’”
Ball also detailed some of his steadfast shooting and framing rules, a practice not uncommon in episodics. “I set rules in place that I wasn’t allowed to break. We never see someone’s face. We avoid showing people on screen for too long. Whatever dialogue is delivered is always delivered off-screen. We never go outside. We never leave the house. We’re always in the house.”
With those visual constraints in place Ball looked to the audio to seal the horror. “I didn’t just want Skinamarink to look like an old movie,” he told Feldberg. “I wanted it to feel and sound like one. I wanted to go really [hard] with that. I didn’t just want to make the dialogue sound like it was recorded on an old microphone. I wanted the audio to feel like an old, scratched-up re-taping of a film that wasn’t preserved from the ‘70s — lots of hiss, lots of hum.”
Writer-director-editor Kyle Edward Ball’s “Skinamarink.” Cr: Shudder
Ball’s idea for the visuals was to shoot as near to darkness as possible and then, with his DP Jamie McCrae, grade to further distress what he shot. He explained to Lex Briscuso at Inverse how they created the look. “When I was doing my YouTube channel, I was also gravitating toward the lo-fi look. I thought, ‘Why can’t I make a movie like it’s from the ‘70s? Or the ‘50s? The ‘30s?’ It evolved into, ‘What if I did an entire movie in this style?’ So I started writing my script,” he said.
“Working with my amazing director of photography, Jamie McCrae, I said, ‘OK let’s get a really good camera that’s really good in low light and see if we can just use practicals.’ I set some rules for myself. We can only use practical lights: flashlights, light coming off a TV, a lamp.”
Another big issue was the scenes set in pitch black, Ball told Briscuso. “Obviously, we couldn’t shoot 100% pitch black unless we used infrared, so we developed this technique of putting a sun gun on top of the camera, putting a blue filter over it, and grading with it,” he said.
McCrae selected the Sony FX6 as the main camera, Ball recounted, adding, “I forget what lenses we used, but the great thing about a modern digital camera — and that one in particular — is that it almost sees in the dark, almost better than the human eye, with somewhat minimal artifacting or grain.”
But when Ball reached the post-production stage, he discovered that he couldn’t edit the film and then age the material after the fact. “I had to do it in tandem,” he told Briscuso. “The mood is so intrinsically tied to the lo-fi aspect of it that it was impossible. So I did it step by step; that’s really why the editing took four months.”
To make the footage appear old, Ball employed a package of 16mm film grain overlays he already had on hand. “In editing, I picked different overlays, graded and played with the levels shot-by-shot, and I just did that until it looked right and read well. It wasn’t just one overlay I looped a hundred times. I took my time to make sure there were enough varieties, so you didn’t subconsciously say, ‘Oh, I’ve seen this overlay before.’
“As far as the special effects, a lot of it was just simple old Hollywood tricks that you can get away with if you’re using a layer of grain over it. There’s a few parts where things appear on the ceiling, floating. That was literally just me holding it up and photoshopping myself out. The doors and windows, I just Content Awared them out.”
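As a rough illustration of that kind of overlay work, here is a sketch in Python of the common approach to compositing a scanned film-grain plate over footage: an “overlay” blend in which mid-gray in the grain plate leaves the image untouched. This is a generic technique, not Ball’s actual pipeline, and the lift and gain numbers are invented for the example.

```python
import numpy as np

def overlay_blend(base, grain):
    """Standard 'overlay' blend: base and grain are float RGB in [0, 1].
    Gray (0.5) grain pixels leave the base untouched; darker or lighter
    grain pixels push the base down or up."""
    dark = 2.0 * base * grain
    light = 1.0 - 2.0 * (1.0 - base) * (1.0 - grain)
    return np.where(base < 0.5, dark, light)

def distress_shot(frames, grain_frames, lift=0.02, gain=0.9):
    """Dim and lift each frame slightly, then composite grain on top.
    lift/gain are illustrative per-shot trims, tweaked by eye."""
    out = []
    for frame, grain in zip(frames, grain_frames):
        graded = np.clip(frame * gain + lift, 0.0, 1.0)
        out.append(overlay_blend(graded, grain))
    return out
```

Feeding in a different grain plate per shot, as Ball describes doing, is what keeps a viewer from subconsciously clocking the same scratches twice.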
Sam Theilman‘s review of the movie for Slate is perhaps the most discerning: “I think Skinamarink is the first movie I’ve seen that is shot in such a way as to show only what its child protagonist can understand. I can’t imagine another film doing this successfully, or even wanting to see this particular film again, but it’s a remarkable achievement,” he writes.
“It evokes the nameless dread of barely verbal childhood so thoroughly and uncompromisingly that it remains frightening long after it ends, not because it forces us to question the rational world, but because it makes us remember a time before we could understand anything at all.”
Variety’s William Earl has the scoop on what’s next for Ball. “He’s currently kicking around two ideas that both sound like a logical extension of Skinamarink. One is a take on the Pied Piper legend, the other about three strangers who all see the same house in a dream.”
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Jordan Peele taps Swiss cinematographer Hoyte Van Hoytema to build a custom IMAX camera rig to capture the wide-open landscapes of “Nope.”
March 10, 2023
“Poker Face:” The Sunday Mystery Movie But It’s Streaming
TL;DR
“Knives Out” and “Glass Onion” director Rian Johnson talks about his exciting new Peacock case-of-the-week series “Poker Face,” starring Natasha Lyonne as a mystery-solving fugitive.
Johnson discusses the challenges of writing a mystery series where the main character has the superhuman ability to recognize when someone is lying, and the importance of crafting standalone TV episodes even in an increasingly serialized era of TV.
Johnson calls this mystery subgenre a “howcatchem,” where it’s very much about the detective versus the guest star of the episode.
To make new television, it helps if you’ve watched a lot of old television. That’s a lesson evident in Poker Face, the crime-thriller series created by Rian Johnson and starring Natasha Lyonne, which makes its debut January 26 on Peacock.
Lyonne — creator and star of Netflix series Russian Doll — plays Charlie Cale, a woman employed by a casino with a preternatural ability to tell when people are lying.
As Johnson, the writer and director of Knives Out and Glass Onion, explained to Dave Itzkoff of The New York Times, the self-contained installments of Poker Face are a deliberate throwback to a style of TV storytelling that Johnson grew up with in the 1970s and ‘80s.
“That’s when I had control of the television,” Johnson said. “And it was typically hourlong, star-driven, case-of-the-week shows.”
They weren’t only detective programs like Columbo and Murder, She Wrote, he said, but also adventure series like Quantum Leap, The A-Team, Highway to Heaven and The Incredible Hulk, which were notable for “the anchoring presence of a charismatic lead and a different set of guest stars and, in many cases, a totally different location, every single week.”
Those ever-changing elements kept things fresh and surprising, he said.
In an interview with Alison Herman for The Ringer, Johnson was asked about similarities between Poker Face and Columbo. “I’d include The Rockford Files and Quantum Leap, but also Highway to Heaven and [the 1978 TV series] The Incredible Hulk,” he told Herman. “It’s kind of got the DNA of all that stuff. And that’s the stuff that I was sitting on the rug in front of my family’s TV watching reruns of every single afternoon as a kid. It’s the TV that I was raised on.”
“The ‘DNA’ of these shows is certainly there when you watch an episode of Poker Face play out,” Ernesto Valenzuela writes at SlashFilm. “Bill Bixby’s man-on-the-run character of Bruce Banner in The Incredible Hulk, who helps bring justice to whatever town he ends up in, is recreated with a much less gloomy angle with Charlie’s fugitive status. And much like Jim Rockford in The Rockford Files, Charlie is also down on her luck, living in a mobile home in the first episode where she is very much not an officer of the law. Poker Face thrives off of its influences, and the structure’s repetitive nature isn’t a detriment to the show — it’s actually a big reason viewers should tune in every week.”
The show’s resemblance to Columbo is a good thing for fans, Amos Barshad notes at Wired. Johnson, says Barshad, “tiptoed” around the issue at first but the jig was up following an interview in Vulture with The Mountain Goats’ John Darnielle: “I was probably bugging [Johnson] about something and he texted, ‘Want to talk to you about this TV show I’m doing with Natasha Lyonne. It’s basically Columbo with her as the detective,’” Darnielle said.
“To everyone who loves Columbo, this is a great thing,” Barshad writes. “Even beyond the unique format, with the murderer reveal happening first — which means it’s a howcatchem, not a whodunit — Poker Face embodies the rough, throwback, blaringly uncool charms of its spiritual antecedent. Like Columbo, Cale is often going after the rich and powerful, the kind of people who think they shouldn’t have to atone for their sins. Like Columbo, she’s constantly underestimated, a trait she finesses to her ends. Peter Falk’s portrayal of the fumbling detective is an all-timer; the way he pivots from buffoon to razor-sharp gumshoe is a thing of beauty and joy, which means Lyonne has her work cut out for her if she wants to put Cale up on the mantle with Columbo in the TV Sleuths Hall of Fame. But in the handful of episodes available so far (a new one dropped today), it’s clear Lyonne — salty, resilient, irrationally confident — is presenting a very unique kind of crimefighter.”
In his review for The New York Times, chief television critic James Poniewozik calls Poker Face “the Best New Detective Show of 1973,” noting that Lyonne “has one of TV’s most distinctive presences, with an old-soul rasp and a hipster-next-door bearing that’s simultaneously down-to-earth and cosmic.”
He adds: “The logo may say Peacock — the streaming service that premieres the series on Thursday — but the vibe says NBC weeknights in the 1970s.”
The series, Poniewozik says, draws you in with its retro style, “but the vintage echoes are also deeply thematic. The ‘70s loved a beautiful loser, like James Garner’s Jim Rockford, the ex-con private eye whom the world gave the bum’s rush no matter how many cases he cracked.”
Poker Face is not a whodunit but an “open mystery” because the audience starts out each episode by seeing who did it, how, and why, before Charlie begins to investigate. Johnson himself calls this mystery subgenre a “howcatchem,” where it’s very much about the detective versus the guest star of the episode, as Johnson also confirms to Brandy Clark at Collider: “These are not whodunits, these are howcatchems. Show the killing, and about Natasha [Lyonne] vs. the guest star.”
As Clark points out, the benefit of these types of shows is that a viewer can jump in at any time, without wondering or worrying if they need to see the previous episodes to understand the story or the plot.
Of course, Columbo is the key reference point and an acknowledged part of Daniel Craig’s character Benoit Blanc in the Knives Out mysteries. Johnson told Rolling Stone’s Alan Sepinwall that he binged the entire series during lockdown.
“My big revelation from bingeing it is, I wasn’t coming back for the mysteries. Although the mysteries are fun, I was coming back to hang out with Peter Falk. And in that way, I feel like those shows have as much in common with sitcoms as they do anything else.”
He added, “It’s not really about the story or the content. It’s about just hanging out with somebody that you like, and the comforting rhythms of a repeated pattern over and over with a character that you really liked being with. That’s kind of what I saw when I watched Natasha in Russian Doll, that made me think this could be interesting.”
Lyonne also said that she loved characters such as Columbo, Elliott Gould’s Philip Marlowe in The Long Goodbye and Dennis Franz’s Andy Sipowicz in NYPD Blue, as reported by Deadline’s Peter White.
Speaking at NBCUniversal’s TCA press tour, Lyonne said that Charlie is “floating above a situation trying to crack a riddle, but also an everyman who has their nose to the grindstone and figuring out the sounds of the street.”
Once Johnson had decided to make her a human bullshit detector, rather than a detective or a mystery writer, he realized he had a problem, but this became the key to unlocking how the show might unfold.
“How was the show just not over within the first five minutes, if she can tell when people are lying?” he told Rolling Stone. “I had her give a speech in the pilot about how it’s less useful than you think because everyone’s always lying. It’s about looking for the subtlety of why is somebody lying about a specific thing. And we found really fun ways to play that at different episodes going forward.”
Natasha Lyonne as Charlie Cale, with guest stars including Benjamin Bratt, Adrien Brody, Hong Chau, Chloë Sevigny, Judith Light and Lil Rel Howery, in “Poker Face.” Cr: Peacock
Although Johnson is red hot and you’d think people would be biting his hand off to work with him, he says pitching a more old-fashioned TV format got pushback.
“I was unprepared for the blank stares. And then the follow-up questions of, ‘Yes, but what’s the arc over the season?’ I think there is right now this odd assumption that that’s what keeps people watching, just because there’s been so much of that in the streaming world that I think people equate the cliffhanger at the end of an episode with what gets people to click ‘Next.’ But TV, until incredibly recently, was entirely in this episodic mode. So I know it can work because I grew up tuning in every day for it.”
One reason it’s harder to do episodic case-of-the-week stories is the expense and the production challenge: you have to keep bringing in new guest stars and visiting new locations.
“Holy crap, it was a headache,” Johnson admits to Rolling Stone. “I don’t think we even realized what we’re up against. No standing sets. No recurring characters besides Natasha and occasionally Benjamin Bratt. But we’re very purposefully going for the Columbo approach of big fish guest stars. So every single one of these episodes, we try and get somebody very exciting to play either the killer or the victim. And it was a lot.”
Indeed, the cast list across the season includes Adrien Brody, Ellen Barkin, Nick Nolte, Stephanie Hsu, Joseph Gordon-Levitt, Ron Perlman, Chloë Sevigny, Lil Rel Howery, Clea DuVall, Tim Blake Nelson, and many more.
Asked during a Q&A panel at the Winter Television Critics Association Presentation whether he writes specifically to those guest stars, he replied: “In the room, sometimes we’d have a placeholder actor, and it would end up being them, or surprisingly someone else. A benefit of this subgenre is that it is the guest star’s episode, and you see them go head-to-head with Natasha.”
Johnson continued to sing the praises of television in front of the ballroom full of television reporters and critics — saying he preferred the “pace” of this newfound process vs. film. Each hour-long Poker Face episode took about three weeks (one for prep, two for shooting) to complete. Compare that with making one film over the course of “several years,” as he put it.
“I loved that in each episode we’re in a different environment, it’s a whole new cast — it’s like making 10 mini movies,” Johnson told IndieWire’s Tony Maglio. “I literally dove into it like it was one of my movies. I really jumped completely into the deep end of the pool.”
Johnson has previously directed for TV, notably three episodes of Breaking Bad, including the acclaimed “Ozymandias.” Episode two of Poker Face, which he directed, was shot in Albuquerque.
“I haven’t been back there since we shot ‘Ozymandias,’” he told Rolling Stone. “It was so much fun being back in town. A lot of the same Breaking Bad crew were on our crew, and it felt like a little homecoming.”
Johnson explained to Angela Watercutter at Wired that while Poker Face does have a throughline, any given episode is a standalone. That was “a hugely conscious choice,” he said, “something that I had no idea was gonna seem so radical to all the people we were pitching it to. The streaming serialized narrative has just become the gravity of a thousand suns to the point where everyone’s collective memory has been erased. That was not the mode of storytelling that kept people watching television for the vast history of TV. So it was not only a choice, it was a choice we really had to kind of fight for.”
Johnson discussed director Robert Altman’s influence on the pilot episode of Poker Face in a Q&A with Joshua Encinias for MovieMaker Magazine:
“Altman’s Nashville was definitely a big reference point for me when I was approaching how to shoot the pilot. I will say each one of the episodes very much has its own personality. Columbo was set in Los Angeles, but he was diving into a different profession every single time, based on what the killer did. There’s an anthropological element to it, where you’re doing a little deep dive into a different world every time. That’s very much a part of the show going forward and we allow ourselves, tonally, to give ourselves over to that. There’s an episode, for instance, set in a regional dinner theater with Ellen Barkin and Tim Meadows that’s absolutely hilarious and very comedic in tone, almost like a Noises Off style. The one I did with Joseph Gordon-Levitt is set in this snowed-in motel within the Rockies and it’s almost more like a Coen Brothers horror movie. But yes, absolutely, in the pilot I was looking at a lot of Altman. Also in terms of the looseness of the style of shooting. It seemed like a fun route to go.”
Another Altman film also left its mark on Poker Face. “Janicza Bravo directed one of our episodes and I think she put a very subtle, intentional California Split reference in her episode,” Johnson said. “We talk about that movie a lot on set.”
As the streaming wars rage on, consumers continue to be the clear winners with an abundance of series ripe for binging. See how your favorite episodics and limited series were brought to the screen with these hand-picked articles plucked from the NAB Amplify archives:
Editor Bob Ducsay, ASC on the layers of structure and sleight-of-hand behind writer-director Rian Johnson’s “Glass Onion: A Knives Out Mystery.”
March 19, 2023
“M3GAN:” James Wan, Gerard Johnstone, and Jason Blum Know What You Want
TL;DR
Hit movie “M3GAN” has busted the $100 million worldwide ticket sales barrier on a $12 million production budget.
Director Gerard Johnstone was inspired by horror-comedies like Edgar Wright and Simon Pegg’s rom-zom-com “Shaun of the Dead.”
New Zealand actress Amie Donald played the demented AI and ended up doing her own stunts.
Perhaps it’s our underlying fear of what AI will lead to, or a horror jolt we needed to kickstart the year, but the hit movie M3GAN has busted the $100 million worldwide ticket sales barrier on a $12 million production budget. A generous PG-13 rating has also lured in the teenage market, with even younger kids finding a way into theaters to catch horror-comedy at its best.
Vanity Fair’s Julie Miller looked into the toy slayers and analyzed the genre. “The killer doll trope is nothing new — 60 years ago, a pigtailed doll in ribbons and ruffles named ‘Talky Tina’ took out an evil stepfather in a Twilight Zone episode,” she writes.
“In the decades since, there have been knife-wielding dolls, deranged puppets, demonic fetish figures, and diabolical porcelain dolls fronting horror films.” But maybe the effect is easily explained by Frank McAndrew, a psychologist who has researched the concept of creepiness.
“They have eyes and ears and heads and all of the things that normal human beings have,” explains McAndrew. “But there’s something off — the deadness in their eyes, their blank stares. They’re cute and they’re supposed to be for children,” he says, but the human realism causes “our brain to give off conflicting signals. For some people that can be very discomforting.”
McAndrew adds that dolls are especially effective horror-movie antagonists because murderous streaks seem so unlikely in a child’s toy.
But perhaps the most interesting aspect of M3GAN is how a seemingly CGI-laced movie was made for only $12 million. The modest budget was partly a consequence of shooting in New Zealand during COVID, since at the time the country had barely been exposed to the pandemic. But it was also down to the skills of a young local actress, Amie Donald, who played the demented AI and ended up doing her own stunts.
Amie Donald as M3gan, Allison Williams as Gemma, and Violet McGraw as Cady in director Gerard Johnstone’s “M3GAN.” Cr: Blumhouse
Jen Yamato at the Los Angeles Times tracked down the actress’s movement coaches. “Casting local performer and international competitive dancer Donald, now 12, to physically embody M3GAN turned out to be fortuitous. Although it was her first film role, the actor, who has also since appeared on Sweet Tooth, was off book within a week and loved doing her own stunts. ‘She was just extraordinary,’ says director Gerard Johnstone,” Yamato reports.
“Working with movement coaches Jed Brophy (The Lord of the Rings) and Luke Hawker (Thor: Love and Thunder) and stunt coordinator Isaac ‘Ike’ Hamon (Black Adam), she developed M3GAN’s physicality, which becomes more humanlike the longer she’s around humans. She adopted barely perceptible movements — a slight cock of the head, a step a bit too close for comfort — to maximize the unsettling effect M3GAN has on people.”
Donald proved to the director how well she could do her own stunts: even before the first day of shooting, she nailed the all-fours forest move seen in the trailer, having perfected it at home. “All of a sudden we get this video from her mother, where Amie had just figured out how to do this on the carpet at home,” said Johnstone. “And she could run on all fours!”
CGI was definitely minimized in the movie, but WETA Workshop contributed additional designs to the film, and Oscar-nominated Adrien Morot and Kathy Tse of Montreal-based Morot FX Studios were entrusted to smooth out the joins of animatronics, puppets, posable and stunt M3GANs, as well as Donald herself.
Director Gerard Johnstone was also keen to bring a level of humor to the movie and find ways to echo his own experience of parenthood, as he told Gregory Ellwood at The Playlist. “But what I brought to it was definitely my own sense of humor and my own experiences as a parent. I wanted to put as many of my own struggles and anxieties and frustrations that I was having as a parent into this movie. Parenting in the age of AI and iPads isn’t easy.”
Speaking to Valerie Ettenhofer in an interview for SlashFilm, Johnstone cited Edgar Wright and Simon Pegg’s rom-zom-com Shaun of the Dead as teaching him a significant lesson in style. “My big lesson from them when I first watched Shaun of the Dead… was just how seriously they took both genres,” the director shares. “If I was going to do this, I had to treat the horror as seriously as I did the comedy.”
Johnstone struck a balance between horror and comedy with his first film, Housebound, which he continued with M3GAN, Ettenhofer notes, “a movie that offsets its most violent and unsettling scenes with moments in which the titular android does a hair-twirling dance or breaks into spontaneous song.”
Johnstone also namedrops a few other greats that he considers fun horror touchstones. “I’m a big fan of Sam Raimi, Drag Me to Hell and The Evil Dead trilogy.” He also commends Wes Craven, plus the “very deadpan” films of Joel and Ethan Coen, which he says employ “just a very dry tone, but you can tell they’re secretly making comedies.”
All the film references in the world mean nothing, however, when your movie becomes a litany of internet memes, which M3GAN quickly generated. Karla Rodriguez at Complex put it to the director that once part of your movie or trailer becomes a meme, you know you’ve struck gold.
“And they were amazing,” Johnstone agreed, “and I just couldn’t believe how many of them there were. But I thought they were giving too much away in the trailer of the dance scene. I was like, ‘I just want a hint of it, something weird happening to tease people.’ And Universal said, ‘You don’t know what you’re talking about.’ And I didn’t know what I was talking about clearly because people just took it, recut it, put it to different music and it was just the gift that kept on giving.”
So where does that leave the psychotic M3GAN doll? A scary range of merch maybe, but definitely at least one sequel because, like artificial intelligence, we just can’t get enough of her. Producer Jason Blum has already said as much. Blum “did something he’d never done in his nearly 30-year career: He publicly admitted his desire to make a sequel before the movie even opened in theaters,” Rebecca Rubin reports at Variety. “He just felt certain that audiences would instantly fall in love with M3GAN, short for Model 3 Generative Android, whose chaotic dance moves, pithy one-liners and killer tendencies turned her into an internet icon as soon as Universal debuted the first trailer.”
“We broke our cardinal rule,” he says. “I felt so bullish that we started entertaining a sequel earlier than we usually do.”
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Jordan Peele taps Swiss cinematographer Hoyte Van Hoytema to build a custom IMAX camera rig to capture the wide-open landscapes of “Nope.”
January 31, 2023
“Troll:” Norway’s Motion Blur Makes a Modern (Ancient) Kaiju
TL;DR
Pushing aside “RRR” in the global marketplace, Norway’s “Troll” has become Netflix’s best performing non-English film.
Partly inspired by “King Kong” and Godzilla films, “Troll” employs many classic monster movie tropes with a distinctly Norwegian spin.
The character design for the titular troll was inspired by paintings by Theodor Severin Kittelsen, one of Norway’s most popular artists.
Espen Horn, producer and CEO of production company Motion Blur, said it was important that the production use Norwegians as crew, SFX and VFX vendors as much as possible, “because we wanted to show the world that this was genuinely a Norwegian or Nordic film.”
Troll from Netflix has seen some highly impressive viewing figures since its arrival on the platform and quickly became its best performing non-English film. This breakdown comes from Naman Ramachandran at Variety: “With a total of 128 million hours viewed and still counting, the film has taken the top spot on the non-English Netflix Top 10. It is in the Top 10 in 93 countries including Norway, France, Germany, the US, the UK, Japan, South Korea, Brazil and Mexico.”
Monster movies have always had a wide fan base, and Troll has all the attractions and tropes those fans like: cityscape destruction, believable and well-executed VFX, a credible folklore backstory, and a monster with feelings and a purpose. It’s something Renaldo Matadeen picked up on in his review for Comic Book Resources.
“The remains of his tribe got left in a palace under the Royal Palace, which means the troll king’s domain has been desecrated. So, he’s stomping his way to Oslo to destroy the place for what happened to his family and to crush the symbol of Christianity, politics and corruption.”
Yes, Troll was partly inspired by King Kong and Godzilla films, including Godzilla vs. Kong, but don’t forget Cloverfield, with its clever “monster in a city” reality. One of the most important aspects of the production, though, was to keep it very Norwegian notwithstanding the monster action at its core. Espen Horn, producer and CEO of Motion Blur, explained the vision: “It was a big and important dream for us that we should use Norwegians as crew, SFX and VFX vendors as much as possible, because we wanted to show the world that this was genuinely a Norwegian or Nordic film,” he said.
“That was very important because, even as the film has a classic monster genre formula to it, as some people claim, it was important to us to maintain originality in terms of the characters, the mythology and the nature of how we are as people. I think the audience were happy that we kept the Norwegian originality.”
Ine Marie Wilmann as Nora Tidemann, Mads Sjøgård Pettersen as Captain Kristoffer Holm, and Kim S. Falck-Jørgensen as Andreas Isaksen in writer-director Roar Uthaug’s “Troll.” Cr: Netflix