Whether You’re a Photographer or Cinematographer or Both, Here’s How to “Think Like a Filmmaker”
Jon Silberg
The transition from still photographer to cinematographer should be simple, right? Cinematography is just 24 still images per second.
That’s what Jeff Berlin thought when he began the move from successful fashion and editorial photographer, shooting internationally recognized talent in beautiful locations around the world, to professional cinematographer.
But as he explains in this talk presented by B&H Photo, Berlin learned that the two jobs have quite a few differences, both in terms of the approach to the artistry and the specifics of the job description.
After presenting some of his work in fashion and celebrity portraiture, and his backstory peppered with info about the six years he spent jetting off to plum assignments from his home bases in Paris and Milan, he talked about a number of the still photographers who provided him with references and inspiration in his work.
On the path to becoming any kind of visual artist, he says, “you develop and cultivate your sensibility, and you educate your eye.” In the world of stills, he developed a strong familiarity with such greats as Irving Penn, William Eggleston, Dorothea Lange and Richard Avedon.
While those artists’ work will likely always inform Berlin’s, as a cinematographer, he says, you’ve got to “find your references.” He speaks about classics of cinematography such as Days of Heaven from director Terrence Malick and shot by Nestor Almendros and The Danish Girl, directed by Tom Hooper with cinematography by Danny Cohen — “It’s just a really, really lovely film,” he enthuses — or Mike Nichols’ The Graduate, for which Robert Surtees served as cinematographer. These and a number of others, he says, “have become touchstones.”
Cinematographer Steven Bernstein (Monster, directed by Patty Jenkins), “has been my mentor through a lot of this journey,” Berlin says, noting that the DP was among the first to explain to him some of the differences between the two skills.
Berlin says he comes “from a world where you’re looking to shoot the most beautiful images,” but while that is sometimes what directors are looking for, it certainly isn’t always.
Now that he’s shot a number of different projects, he references a director’s treatment for a short film he shot. “‘I don’t like super sharp images. Ever.’ People talk so much about resolution and sharpness, but that isn’t necessarily what filmmakers want to tell their story.”
Berlin also touches on the different vocabulary in the two fields. “A tripod,” he says, “becomes ‘sticks.’ People don’t talk about the F-stop; they talk about the T-stop. There are [cinematography-specific] composition terms such as ‘French overs’ and ‘dirty overs.’”
Not that it’s terribly challenging learning the new argot, but it can be interesting going from being a highly in-demand still photographer to having to learn some basic terminology to shoot motion pictures.
Of course, cameras are an indispensable part of either profession, and Berlin goes into some depth about his findings as a cinematographer seeking “the best camera for the mission.”
The very top tier of motion picture cameras is perfect for productions that can absorb something costing many tens of thousands of dollars and requiring a crew of a certain size just to move it around and ensure its smooth operation. But there are some cameras costing only a few thousand dollars that can be perfect for shoots requiring a smaller crew and a more modest footprint. Often, he points out, features and TV productions mix and match.
Berlin speaks in terms of the Sony gear he uses, but the underlying concepts can easily be transposed to equipment from other major manufacturers.
He speaks about the image quality, from sensor to encoding, of his Venice 2, which, completely “kitted out,” runs about $90,000, and of the FX series (FX3, FX6 and FX9), the cheapest of which costs about $4,000.
Berlin also talks about dual base ISO, a feature of many Sony cameras (as well as models from Panasonic and others), which allows for extreme low-light shooting and day-exterior cinematography without most of the quality sacrifices of the traditional approach to exposure index, which generally meant pumping the gain way up to enable really low-light shooting.
And he goes into the importance of ND (neutral density) filters as a way of controlling exposure in brightly lit conditions without having to change the T-stop (and thus alter the depth of field) or the ISO setting. (Sony cameras’ built-in ND filters offer a convenience some competitors do not.)
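For photographers used to juggling only shutter, aperture and ISO, the arithmetic behind ND grades is worth internalizing. Here is a minimal sketch (not from Berlin’s talk) using the standard relationship that every 0.3 of optical density cuts roughly one stop of light:

```python
import math

# Minimal sketch: relate an ND filter's optical density to the stops of light
# it removes, so exposure can be held without touching the T-stop or ISO.

def nd_stops(optical_density):
    """Stops of light an ND filter removes (0.3 density ~ 1 stop)."""
    return optical_density / math.log10(2)

def transmission(optical_density):
    """Fraction of light the filter lets through."""
    return 10 ** -optical_density

for density in (0.3, 0.6, 0.9, 1.8):
    print(f"ND{density}: ~{nd_stops(density):.1f} stops, "
          f"passes {transmission(density):.1%} of the light")
```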
In sum, he says, it’s comparatively easy to be a solo still photographer. “When you’re in a studio doing a campaign, you have your crew, you have your hair, your makeup, your stylist, your assistant…I would sometimes have three assistants, depending on the kind of job that I was doing.
“But filmmaking is always a team sport. The still photographer is the director, the DP and the gaffer. On a film set, everyone has their own role.”
Furthermore, “as a still photographer, you really want to have a style that identifies you, that individualizes you and gives … an identity to your work. A cinematographer really shouldn’t have a style; you are there to support the vision of the director.”
Futurists Agree “AI Won’t Be the Hollywood Version”
TL;DR
The speed at which AI is advancing has shocked most experts in the field, but others think our fear is misplaced and that there’s actually a lot to be optimistic about.
By 2027 nearly a quarter of the workforce will be disrupted by AI in some form — but that means augmented or assisted by AI, not necessarily automated.
AI is not a single entity but is best thought of as a plurality of AIs, each with specific jobs; some of them may have a degree of consciousness.
The speed at which AI is advancing has shocked most experts in the field, but others think our fear is misplaced and that there’s actually a lot to be optimistic about.
One of them is futurist Sinead Bovell, who contends the current challenges of AI are like those of the internet in the early days of email.
“Since we don’t really know how things are going to transpire, how things are going to evolve, we’re tuning in a lot to Hollywood’s version of the future.
“Of course, some dystopian futures are possible, but I don’t think that’s where we necessarily have to end up,” she said. “There are a lot of amazing people working on things like AI safety, and alignment. So I think we have a good shot, if we can get our act together.”
Bovell was speaking on the “Futurists” episode of Bloomberg’s AI IRL podcast, where she predicted that nearly a quarter of the workforce will be disrupted by artificial intelligence over the next five years. But that doesn’t necessarily mean their jobs will be eliminated by automation — more like augmented by AI.
“For sure, certain tasks will get automated, but that’s different than an entire job,” she says. “It doesn’t matter what job you’re in, you have to figure out how to start using AI tools.
“Over the next 15 years, most of the jobs [impacted by AI] probably haven’t been invented yet — like a social media manager didn’t exist 15 years ago. And now, if a company doesn’t have one it’s toast.”
Proclaiming himself to be “very excited” and “incredibly optimistic” about the future of AI, Kevin Kelly — Senior Maverick at Wired — likens the transformative power of AI to electricity, the printing press, and even language.
“I’m optimistic because so far the benefits certainly outweigh whatever negatives and problems there are,” he told Bloomberg in the same episode. “I think that the problems are smaller, and fewer than we think [and] I think our capacity to solve the problems are greater than we think. So just as AI’s problems are new and powerful, our ability and will to solve them is also increasing.”
Nor does he think that AI’s impact on society will arrive as fast as some fear.
“This [is] a centuries-long journey that we’re on. We’re gonna be having this conversation for the next century. So we have time to adjust and we’re already rapidly adjusting to these things as [new versions] come out within months. The versions are incorporating the objections that people have — whether it’s copyright or bias — and that’s one of the reasons it gives me optimism about our ability to control this as we go forward.”
Kelly points out that AI is not a monolithic entity.
“There are going to be many AIs, many varieties, many species [of AI]. We’re seeing that happening already. The kinds of AIs that might drive your car can be different from the ones that are doing the translation from one language to another in real time, which might be different than the ones that you’re using to make an image. We certainly can generalize some aspects of them but I think it’s very important to make sure that we talk in plurals.”
Some of these AIs are going to be conscious, he predicts, but this will be added in deliberately by humans for specific use cases.
“Some of them may have a little bit of consciousness [but] it’s not binary, it’s kind of a gradation with many varieties. Consciousness is not necessarily something we’re going to put into most AIs, because it’s a liability in most cases.”
How Steven Soderbergh Brings It All Together for “Full Circle”
Timothy Olyphant and Claire Danes in the Max limited series “Full Circle”
TL;DR
In a wide-ranging discussion covering AI, branching narratives, and shooting efficiently, director Steven Soderbergh looks back on the production of Max’s limited series, “Full Circle.”
The series used the new RDX System from Rosco for the virtual production of apartment interiors.
Neither Soderbergh nor collaborator Ed Solomon believes AI will ever match up to the creativity of the lived human experience — but that doesn’t mean they won’t use AI as a tool.
Full Circle, a six-part series that just completed its run on Max, is a melodramatic crime drama with interconnected storylines and hidden secrets, taking viewers on unexpected twists and turns.
Director Steven Soderbergh collaborated with writer Ed Solomon and together they talked about the project during an hour-long roundtable with a handful of trade outlets.
On Shooting Long Takes
One of the hallmarks of the show’s visual style is a tendency toward long takes that present the action at a distance without punching in excessively for close-ups. According to Soderbergh, as Jim Hemphill reported in IndieWire, those long, intricately choreographed takes have a practical component as well as a desirable emotional effect: They allow him to work faster.
“The thing that takes time when you have a lot of work to do in a day is unnecessary coverage,” the director said. “If you can rehearse and block and stage something and know where the cuts are coming before you’ve shot it and you don’t capture any redundant material and you’re not doing 20 or 30 takes of stuff, you can move pretty quickly.”
In the endeavor to shoot efficiently, much of the show’s interiors were shot on a volume stage. While Soderbergh initially hoped to shoot these scenes on location in an apartment near New York’s Washington Square Park, various factors led the production to opt for a sound stage instead, where the team used the new RDX System from Rosco.
Phil Greenstreet, Rosco’s head of development for backdrops & imaging, went on the location scout around the apartments near Washington Square Park and shot hundreds of images with a Fujifilm GFX 100 camera. The apartment set was modified with long hallways for Soderbergh’s roving camera.
“They didn’t want to be messing with motion,” Greenstreet told Bill Desowitz at IndieWire. “They didn’t even want motion in the background, so the flags weren’t moving, the cars weren’t moving, you only see small slivers of cars in the distance anyway.”
Soderbergh explained to IndieWire, “I love what you get from [RDX] and the ability to go from one look to another in a matter of seconds. Literally, I can move the image around, I can adjust the contrast, I can adjust the brightness, I can blow things up, I can shrink them. There’s no other way to get this interactive, refractive light bouncing around the room off the surfaces with that kind of technology.”
Soderbergh and Solomon originally intended Full Circle to be a branching narrative like their 2018 HBO series Mosaic, which gave viewers the option to choose different outcomes for the story via the app.
“On Mosaic, we were able to do that, because that was repurposing the footage to use in both ways. I was using the same footage for the linear version that I was using for the app. That’s why that was not a problem,” Soderbergh explained during the roundtable, as quoted by The Hollywood Reporter’s Hilary Lewis. “My vision for the app version of Full Circle was completely different imagery, completely different approach directorially, different cameras, different everything.”
The Full Circle script was 400 pages, Soderbergh said, with the app version consisting of an additional 170 pages “in which there’s no overlap.”
“I can shoot fast, but I cannot shoot that fast. We had to throw all of that away [though] some of those 170 pages leaked its way back into the linear version.”
The process made Soderbergh question whether there’s any real place for branching narratives in storytelling.
“It’s not clear to me that this form of storytelling is needed or even wanted by audiences. In a primal sense, around the campfire or a dinner table, if somebody pulls the attention of the group to tell a story, the people in that group are expecting and wanting to hear a story that resolves itself. They don’t want to hear somebody tell a story at a dinner table in which they go one way, and then they back up and go, ‘Or it could go this way.’ That’s not what you want. I think there’s a very strong impulse for people to want to be told a story like, ‘You’re the storyteller. Tell me a story. Don’t make me do the work. That is your work.’
“That’s what I’m beginning to think. So it’s a real question whether or not I would return to that format without an idea that I feel can only be executed properly in that format.”
Asked for his thoughts on AI, Soderbergh said it could be helpful as a tool but he has doubts about AI’s ability to mimic the lived human experience.
“It doesn’t know what it means to have a flight cancelled and have to figure out how to get home,” he said, as quoted by Christina Radish at Collider. “At a certain point, that’s a real problem. You have to remember, its only input is data, text and images. It has no body temperature. It doesn’t know what it means to be tired.”
He added, “I think it’s useful for design creation… as a basic way to accumulate a framework. Let’s say it writes a script and it’s supposed to be a comedy script that ChatGPT has generated, and you say to it, ‘It needs to be funnier.’ And it says, ‘How?’ And you go, ‘I don’t know, it just needs to be funnier.’ What does it do? It’s just a tool. But if you asked it to design a creature that’s a combination of a cat and a Volkswagen Beetle, it can do that. That’s fun.”
This naturally segued into a discussion of AI’s implications for industry jobs. Solomon doubled down on his belief that art made by human beings cannot be replicated.
“The problem is, the people making decisions on the highest level are all about the bottom line and ‘How can I get rid of as many human beings as possible?’ [and they] don’t have the ability to judge what is good art and not good art. If we don’t draw a line in the sand now, my fear is we’re going to continue to a place where a lot of people are [going to be] out of work.”
But while viewers may have originally tuned in to see Claire Danes, Dennis Quaid, Timothy Olyphant, Zazie Beetz and Jim Gaffigan, those who stayed with the limited series saw a story about two Guyanese teenagers take center stage.
“You think it’s about this group of well-off white people being victimized. And then over the course of the show, the whole thing starts to tilt,” Soderbergh told the group of reporters, as quoted by THR. “By the end of it, we’re in a very different place than where we started. So it was this melodrama that had this very interesting subterranean thematic thread bubbling along that eventually comes up and takes primacy in the last two episodes.”
The series ends with the lead Guyanese characters walking around the unfinished Colony at Essequibo, the ill-fated development that connected them with Danes’ character’s family, and a pan over to a billboard advertising that the aborted project is “coming in 2003.”
“From the very, very beginning of the script, it was all engineered to that one last shot,” Soderbergh said.
Critical Reception
Whether moving from character to character or balancing suspense and action, Full Circle thrives on efficiency, writes Ben Travers in his review for IndieWire.
“Taken as a creative twist on a tried-and-true format, it balances the experimental and the satisfying in a way TV should strive for more often, especially in an era when filmmakers are being asked to create content. If you’re going to churn out stories for streaming, you may as well maintain your artistic credibility.”
“Sympathy for the Devil:” Nicolas Cage Makes Everything Better, Even LED Volumes
Video courtesy of Vū
TL;DR
Much of the new Nicolas Cage movie “Sympathy for the Devil” takes place inside a car, equating to more than a third of the film’s total screen time.
Initial plans to shoot these car scenes on real streets were disrupted by persistent torrential rains at the Las Vegas location, prompting the production to shift towards virtual production inside the Vū virtual studio.
Director Yuval Adler and cinematographer Steve Holleran found that using Vū’s LED volume for these shots provided better and more authentic visuals faster than possible with traditional on-road or greenscreen filming.
The Vū studio allowed for genuine reflections on the car’s glass, enhancing the realism of the shots. This shift in filming methodology led to a significant increase in efficiency, with Adler obtaining about 12 minutes of material in a day compared to potentially just one shot using traditional methods. Learn more in the video above.
A significant portion of the new Nicolas Cage movie, Sympathy for the Devil, unfolds inside a car as Cage’s terrifyingly histrionic character forces the driver (Joel Kinnaman) to drive him at gunpoint, committing horrible acts of violence along the way.
While writer/director Yuval Adler initially planned to shoot those scenes with a real car, along with the attendant headaches and delays of street closures, camera cars and rigs, his plans were undone by the prolonged torrential rains that plagued the Las Vegas location. So the production decided to pivot and shoot all the car shots, which comprise more than one third of the film’s 90-minute screen time, inside the Vū virtual studio.
Adler and cinematographer Steve Holleran report that shooting inside an LED volume enabled them to get better-looking shots much more quickly than they likely could have out on real roads or on a greenscreen stage.
“When you have green screen or blue screen behind a car,” Holleran says, “it’s a lot harder to get authentic reflections in the glass. You put a car in the volume and put [a virtual] environment around it and reflections on top,” and you’re already closer to a shot that looks real.
Adler much preferred shooting in front of the LED wall “as opposed to being outside in a car, which is an absolute nightmare.” Comparing the two approaches, the director says that shooting practically out on the road can yield something like one shot in a day, while on Sympathy he was able to come away with about 12 minutes of material in the same timeframe.
In an interview for the Panavision website, Holleran outlined his creative approach further, noting that he got his start as a fine art photographer, inspired by the work of photographers like Henri Prestes and Christophe Jacrot, “who work heavily with surrealistic, hazy textures,” he said.
“I wanted there to be something akin to a soft veil across the image, as if we were in a nightmare upside-down world. The other reference was Las Vegas’ Neon Museum, which is a great boneyard of old neon lights from the city. Walking the museum at night was a wonderful playground for color inspiration, say the way primaries fade and turn strange with time. Then ultimately, it’s the themes in the film that have the final say.”
The DP also detailed his decision to shoot with Panaspeed optics sourced from Panavision Woodland Hills. “Often when choosing lenses, I start with two sets of parameters which don’t overtly align,” he said.
“On Sympathy, my first set of needs were lenses that were fast, lightweight, and had a range that leaned towards the wide side. This instantly cut out a large chunk of glass, much of it vintage, some modern. Then I wanted a specific creative look, for instance a set of glass that bloomed the highlights, had heavy halation, lifted blacks, with a cat-eye effect. Those prerequisites didn’t already exist together or were not readily available, so we turned to the modern Panaspeeds for their customizability, so we could ‘tune’ a look into a set of lenses that matched my technical specs.”
Sympathy for the Devil is “a surrealist pop-nightmare thriller set on the forgotten edges of Vegas,” Holleran says. “It’s got melancholy and rage fighting for space in front of the lens. We’re exploring the in-betweens of good and bad, truth and lies, past and present, right and wrong — both in the script and with the cinematography. It has upside-down, head-turning relativity at its heart, with who is good and what is true coming into question, and so the film’s look leans into asking these questions of the audience subjectively through composition, movement and color choice.”
Sympathy for the Devil, a film Adler promises will offer “dark humor, violence and truth,” is now in theaters and available on streaming platforms. (Of course, those attributes are icing on the cake when you’re talking about a movie with Nicolas Cage doing full-on Nicolas Cage.)
LED cinematographer Erik “Wolfie” Wolford presents an in-depth demonstration of virtual production using LED walls at the Entertainment Technology Center’s vETC conference.
Wolford’s demo perfectly matches real and virtual lighting to create a realistic illusion of an actress standing on a sunlit beach, all controlled in real time through Unreal Engine.
His technical setup includes an HP Z6 G5 desktop workstation equipped with an Intel Xeon (Sapphire Rapids) processor and a pair of high-end NVIDIA RTX 6000 graphics cards running Unreal Engine.
Kino Flo’s new Mimik lighting, designed to create full-spectrum foreground lighting for virtual sets and overcome the limitations of LED wall light for skin reflections, automatically adapts to scene changes inside Unreal Engine.
One of the hottest topics of conversation in the filmmaking community of late has been about the use of LED walls, or volumes, in production.
But few have explained it in as much detail as LED cinematographer Erik “Wolfie” Wolford, who this past June shared the real nuts and bolts of how he handles virtual production. His talk, “LED Stage Architecture: How It’s Built,” was presented during a session of the Entertainment Technology Center’s virtual conference, vETC.
Wolford, who’s shot music videos and documentaries, recounts his career path starting at the bottom rung on the production ladder on music videos for such visionaries as Spike Jonze and Michel Gondry. He eventually moved into lighting for special effects, particularly for green screen work, on features, commercials and music videos before focusing his creative energies on virtual production in his current role of LED cinematographer.
His first job involving Unreal Engine was the 2022 animated short Mr. Spam Gets a New Hat from director William Joyce and international VFX company DNEG. The 2D animators working on the project, he recalls, had to adapt to the challenges of 3D in the virtual world. “I came in and moved lights around, changed the size of the lights to make them wrap better, added a lot of shadows, and soon I was hooked!”
As Unreal Engine started to be used more for virtual production or ICVFX (in-camera visual effects), he was all over it.
To illustrate his talk, Wolford created a setup with an actress positioned in front of a digital wall. The real lighting on the actress and surrounding stage was matched with a virtual set of a beach, with the game engine controlling the interaction of foreground and background in real time. So convincing was the setup that the actress appeared to be standing on a sunlit beach, complete with a cliff and the ocean in the background. The illusion’s success lay in the seamless lighting — every element, real and virtual, seemed to be lit from the same source.
He breaks down the setup for the attendees. Unreal Engine is running on an HP Z6 G5 desktop workstation with an Intel Xeon (Sapphire Rapids) processor and a pair of high-end NVIDIA RTX 6000 graphics cards. One machine runs Unreal Engine, doing all the 3D movement based on the position of the camera, “just like first-person shooter games,” he explains, adding that changes in the positioning, depth of field and focal length of his real camera are immediately translated to the 3D background.
Then there is another box into which the first one feeds the signal; it sends the signal out through an ATEM switcher to yet another box, which serves as the editor node. This third box feeds the signal into a Brompton Technology processor, which takes the background image and breaks it into many squares, delivering each square to an individual one-by-one panel.
The signal then goes back to the record decks and feeds the Megapixel VR HELIOS LED processing platform, driving special new lights that Kino Flo lent for this demo, called Mimik, designed specifically to let users create full-spectrum foreground lighting for virtual sets. The Mimik lights use the same technology as the LED wall, but purely to create interactive lighting on the set.
“When we work with LED walls,” he says, “they look great to the eye. They look great to the camera. But they don’t create light that’s great for skin reflections.” To create the illusion that the set wraps around, you want the same color space as the set, so the Mimik light gives a full-CRI (color rendering index) image — essential when using LED panels as a light source. “The reds look really red; it looks pretty good on skin. I can simulate more walls giving me the color I want, and it helps tie the actor into the magic of the scene.”
Wolford also likes to use a lot of lights from Aputure, which have an excellent CRI, are efficient and portable, and are easily managed with a smartphone. But, he says, “it takes me about 10 minutes to tune in a color. Then, if I change the scene from daytime to nighttime, I have to go change all my lighting.” Instead, he explains, “the Kino Flo Mimiks let me put the Unreal scene right into the Mimik light. Then the Mimik light just automatically updates as I change [the characteristics of the] scene. If I go from day mode into night mode, it will go into night mode. If the scene [on the LED wall] is a sunset, it will automatically generate sunset light.”
“These Mimik lights,” he says, “are half LED wall and half actual proper film set light.”
He elaborates on the specific scene he’s set up for the demo: “We’re doing a little campfire beach scene, so we have a virtual light on the screen behind doing a virtual flicker of fire on the wall.” Then he places a physical Mimik light on the actress, which helps sell the illusion that this is all a real scene on a beach.
He demonstrates the real camera and the virtual camera running off Unreal that captures the motion of the real camera as it trucks left or right or dollies closer to the subject. The scene on the LED wall compensates with all the appropriate parallax applied to the scene, again, as he points out, “just the way a first-person shooter game works.”
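Conceptually, the wall acts like a window: each virtual point is drawn where the line from the physical camera to that point crosses the wall plane, so camera moves produce correct parallax. Here is a toy sketch of that idea (my own illustration, not Wolford’s code, with made-up positions in meters):

```python
# Toy illustration of LED-wall parallax (hypothetical numbers, in meters).
# The wall sits at z = 0; the real camera is in front of it (negative z);
# virtual scenery lives "behind" it (positive z). Each virtual point is drawn
# where the camera-to-point ray crosses the wall plane.

def project_to_wall(camera, point):
    cx, cy, cz = camera
    px, py, pz = point
    t = (0.0 - cz) / (pz - cz)  # parameter where the ray reaches z = 0
    return cx + t * (px - cx), cy + t * (py - cy)

near_rock = (0.0, 1.0, 2.0)    # 2 m behind the wall
far_cliff = (0.0, 5.0, 50.0)   # 50 m behind the wall

for cam_x in (0.0, 1.0):       # truck the real camera 1 m to the right
    cam = (cam_x, 1.5, -4.0)
    near_x = project_to_wall(cam, near_rock)[0]
    far_x = project_to_wall(cam, far_cliff)[0]
    print(f"camera at x={cam_x:.0f} m: near rock drawn at x={near_x:.2f} m, "
          f"far cliff drawn at x={far_x:.2f} m")

# The far cliff's image nearly follows the camera (so, seen from the lens, it
# barely moves, just like a real distant object), while the near rock's image
# lags behind and visibly shifts against the background: correct parallax.
```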
Digging deeper into the technology, Wolford notes that the virtual camera’s positioning is assisted by a Mo-Sys box using its IR reader, which “observes” little stickers on the real camera by sending out an IR beam and using the IR reflection to determine the relationship between the box and the real camera, translating that into positioning data for the virtual scene.
Capturing lens data, such as zoom focal length and aperture, is done either with a similar device on the camera’s lens, which uses a gear apparatus to track the position of the lens barrel and sends that data into Unreal Engine, or, in the case of some newer lenses, by reading that metadata directly from the lens mechanism. Either way, zooming in or out or opening or closing the iris is translated into data, fed into Unreal Engine, and the image on the LED wall is adjusted accordingly.
At this point he explains an important word in his field — the frustum, which refers to the area of the LED wall that the real camera is seeing at any given moment. “It’s really computationally heavy to generate all the data for the entire wall,” he elaborates. “Any way we can save on computational power, we do, and one important way is by only generating data on the screen when and where we need to see it.”
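To get a feel for why rendering only the frustum saves so much work, here is a rough back-of-the-envelope sketch; the wall size, sensor dimensions and camera distance are hypothetical numbers of my own, not figures from Wolford’s demo:

```python
import math

# Rough estimate of how much of an LED wall falls inside the camera frustum,
# i.e., the only region that needs the full-quality, camera-tracked render.

def footprint(sensor_mm, focal_mm, distance_m):
    """Extent (m) of wall covered by the lens's field of view at a distance."""
    half_fov = math.atan(sensor_mm / (2 * focal_mm))
    return 2 * distance_m * math.tan(half_fov)

wall_w, wall_h = 18.0, 6.0              # hypothetical wall, meters
w = footprint(24.89, 35.0, 4.0)         # ~Super 35 width, 35 mm lens, 4 m away
h = footprint(18.66, 35.0, 4.0)         # ~Super 35 height for the vertical extent
share = (w * h) / (wall_w * wall_h)
print(f"Frustum covers {w:.1f} x {h:.1f} m, about {share:.0%} of the wall's pixels")
```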
Explaining his methods for lighting performers, sets and props in front of the LED wall, Wolford says he relies on a lot of techniques he learned lighting foreground elements on greenscreen stages. The idea of interactive lighting that sells the effect and makes the live-action elements seem to exist in a common space with the background is an equally vital aspect of both types of VFX cinematography.
“If I’m lighting a greenscreen [stage] for a J. J. Abrams movie,” he says, “they’ll send me a picture of the effects background” — a style guide, he says — “and I’ll be like, ‘OK they’ve got sun coming from the left, there’s a big fireball that’s going to happen on the right and it’s more a reddish fireball than an orange one… so I’ll put a key light from the left and a reddish effect light on the right to simulate the fireball.
“Now I just look at the LED wall,” he says, “and I have my style guide right there.”
At the conclusion of his presentation, Wolford took a series of questions. His answers revealed additional tidbits, including the fact that the wall the audience was watching used a 1.9 mm pixel pitch (the distance from the center of one pixel to the center of an adjacent pixel is 1.9 mm; the smaller the number, the higher the resolution of the image). To contextualize, he explains, “You go to Coachella and see a big video wall. That’s going to be a 3.9 pitch. This is 1.9 pitch, so these pixels are very tightly packed together.”
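As a quick back-of-the-envelope illustration (the 10 x 5 m wall size below is a made-up example, not the dimensions of the vETC stage), pixel pitch translates directly into how many pixels a wall of a given size holds:

```python
# Pixel pitch vs. wall resolution: smaller pitch, more pixels in the same area.

def wall_resolution(width_m, height_m, pitch_mm):
    """Pixel count of an LED wall with a given pixel pitch."""
    return int(width_m * 1000 / pitch_mm), int(height_m * 1000 / pitch_mm)

for pitch in (1.9, 3.9):                 # cinema-grade vs. concert-grade pitch
    w, h = wall_resolution(10.0, 5.0, pitch)
    print(f"{pitch} mm pitch: {w} x {h} pixels ({w * h / 1e6:.1f} megapixels)")
```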
Asked about Unreal Engine’s primary competitor, Unity, he said that Unity’s graphics engines have dominated the mobile game arena and that the company made a significant move into high-end VFX when it purchased the highly respected New Zealand-based VFX company Weta Digital (the lead VFX house for the Lord of the Rings films). But he stresses that Unreal Engine and parent company Epic Games “have been very aggressive in the video wall space, through training and teaching,” so while he’s aware that some LED walls based on Unity technology exist, he says they are rare and he’s never encountered one.
It Was Just a Beautiful Dream: Virtual Production for “Live Again”
TL;DR
With the music video for the new Chemical Brothers single “Live Again,” Outsider’s Dom&Nic and Untold Studios used virtual production and real-time VFX to create a surreal “Groundhog Day”-esque adventure story.
Outsider and Untold teamed with virtual production specialists from ARRI Solutions, Creative Technology and Lux Machina, with the shoot hosted on and powered by ARRI Stage London.
Filming lengthy takes against CG backgrounds that change in real time without the need to cut camera, the promo breaks fresh ground.
A new music video broke fresh ground by filming lengthy takes against CG backgrounds that change in real time without the need to cut camera.
“Live Again” is the tenth collaboration between British dance band Chemical Brothers and director duo Dom&Nic.
“It’s a trippy Groundhog Day-esque adventure story through multiple environments in a continuous dance performance by Josipa Kukor,” describes Promonews.
To achieve it they filmed long unbroken shots with background virtual environments switched live without edits.
“The woozy, wonky analog sounds and the dream-like lyric suggested a hallucinogenic visual journey following a character caught in a loop of death and rebirth. The hero in the film wakes or is reborn in different environments ranging from deserts to nighttime neon city streets and cave raves to Martian worlds,” the directors told Promonews.
“This is an idea that could not really have been achieved with traditional filmmaking techniques. We created virtual CGI worlds and used long unbroken camera takes, without edits, moving between those different worlds seamlessly with our hero character.”
Dom&Nic’s production company Outsider brought together cinematographer Stephen Keith-Roach, production designer Chris Oddy and VFX facility Untold Studios, along with virtual production specialists from ARRI Solutions, Creative Technology and Lux Machina, all hosted on the ARRI Stage London.
Dom&Nic say that the band encouraged them to capture the feel of the track in the cinematic texture and look of the film.
“We were given the challenge to give it the visual equivalent of putting a clean sound through a broken guitar pedal to transform and degrade it into something unique. We love the way the film has an analog and messed up film look to it, it really adds to the visual trippy experience.”
Untold Studios real-time supervisor Simon Legrand added, “After designing seven bespoke virtual worlds in pre-production, we were then able to tweak elements on set, on the fly, giving the directors the freedom to play and experiment. This is the first time that virtual environments have been switched live on set in this way.”
Will Case, director of innovation at Creative Technology, confirmed, “It really pushed the boundaries of working in real-time workflows and technologies to bring to life Dom&Nic’s visually stunning promo.”
In an interview for the ARRI website, Dom&Nic described how they approached the project knowing that virtual production would be part of the mix. “Being immersed by ARRI Stage London and its walls of screens for the first time was very impressive,” they recalled. “You start wondering how to use the space and the technology to create a narrative that couldn’t be shot in any other way.”
The new technology helped inspire some of their ideas, but it also clearly demonstrated constraints demanding creative solutions, which ultimately led to the promo’s unique style. “Building a set that would transition for all environments was a creative challenge, but one that developed the idea further. For example, a desert floor could become a construction site in a city — once we had worked through that process, things started to tie together,” they said.
“Connecting the Unreal environments with the actor and set wherever possible while using whatever tricks and ideas we could imagine was important,” the duo added. “For example, CGI tumbleweed enters the frame, rolls around the physical set and then off into the 3D environment background. A black, disc-like object in the sky does the opposite: starting in the distance, in the Unreal environment, then right over the head of the actor in the set. Lighting was synced with the camera, and the black disc was also integrated as real-time VFX on the LED wall. This meant our actor could perform and react to the final image, which looked as ominous and convincing on the Stage as it does in the final film.”
Image courtesy of Will Case at Creative Technology
The primary challenge, however, “was taking an unbroken shot through different environments without cutting or using greenscreen. Pre-shoot, we used Unreal Engine to design a range of immersive environments, so we could work out the space for our physical set and get a sense of how our actor and props could be positioned.”
The directorial duo anticipated that it might be difficult to develop “a clear working translation between a traditional camera crew approach and the virtual production elements,” but that wasn’t actually the case. “Our DP, Stephen Keith-Roach, worked with the Stage teams to light the scenes virtually, and with gaffer Kevin McMorrow to use practical and studio lights inside the caravan, which worked seamlessly.”
Given that LED panels have a softer light than natural daylight, the duo employed softer lighting setups that worked well on the physical stage and also helped the blend from set to screen, they said. “It was a very quick process to move the sun across the sky or pop it behind a cloud.”
The Stage’s wraparound design with real-time camera tracking and lighting represents “a great leap forward from traditional greenscreen,” Dom&Nic enthused, with “no edges or spill, and perfect reflections.”
The process for seamlessly integrating CGI and real-world image-based lighting for the promo’s actors, highly reflective set, props, and wardrobe represented the biggest advance, the duo said. “The fact that foregrounds and backgrounds are shot in-camera, with no need to composite later is the icing on the cake!”
AI Is Everywhere: Where Did It Come From, Where Is It Going?
TL;DR
Video Learning Lab host and technology advocate Gary Adcock discusses the evolution and context of AI, stressing its pace and the growth of public awareness.
AI, as defined by Adcock, is a branch of computer science teaching machines to mimic human intelligence.
Adcock notes AI’s prevalence in everyday technology, from social media filters to Tesla’s self-driving system, emphasizing the role of user data in training these systems.
Noteworthy AI applications include stock-picking on investment platforms, enhancing security systems, and the potential for breakthroughs in medical diagnostics.
Adcock presents Coca-Cola’s AI-generated “Masterpiece” commercial as a prime example of AI’s capabilities in creative endeavors.
Technology advocate Gary Adcock hosts a session in our new Video Learning Lab about the evolution of AI, with the terms he prefers: augmented intelligence and machine learning.
This video offers context and historical perspective to a topic that’s currently resounding throughout the world. It’s being met simultaneously with exultation about its possibilities and existential dread about the potential to perform tasks beyond human capability, as well as those of which humans are capable, but for which we may suddenly become totally unnecessary.
Services such as ChatGPT and Midjourney have evolved significantly since most people first noticed their existence earlier this year. In fact, while the technology seems to be moving at light speed, Adcock observes that the phenomenon that has moved the fastest of all is awareness of the technology.
So… what’s AI? Adcock summarizes the overarching concept as “a wide-ranging branch of computer science that allows you to teach, to use the developmental process [similar to what] we would use to teach children [but] to teach machines. And it sometimes appears like it’s ‘intelligent,’ but it’s not.”
While ChatGPT and Midjourney seemed to magically appear, Adcock emphasizes, everyone has had an ongoing relationship with AI at least since they started trading their likeness or data to do something with an app on their device.
“Look at the media filters,” he says. “I’m going to put sunglasses on, I’m going to have a cat face, I’m going to have bunny ears, I’m going to do makeup on my phone. All these social media filters are smart filters built on AI. They’ve been built on a system that says you want to thin out your face? To elongate your chin? To change your hairline?”
All of those things can be adjusted very specifically and a lot of them look quite real. And that’s because we’ve all been teaching the systems how to do what they do. Likewise, Siri and Alexa. We are providing massive amounts of data for these systems to learn from.
Tesla’s self-driving system is also artificial intelligence, he points out, with its massive amount of data processing whenever someone uses their Tesla. “I don’t think people are considering that. A Tesla has as many as 60 cameras in it shooting 4K/60fps material that’s being processed on NVIDIA cards underneath the battery,” primarily to provide the enormous amounts of data necessary to advance the brand’s self-driving capabilities.
Other recent developments can be seen on investment platforms like TD Ameritrade and Robinhood, which have performed experiments that seem to have had success using AI to pick stocks. AI is also being used to prevent corporate hacking, for military purposes and to enhance security systems.
But perhaps the biggest positive developments Adcock spoke about involve the medical field: “A radiologist may have looked at 40 or 50 tests that day. All the knowledge they’ve gained from the experience of seeing these tests might be based on looking at a few thousand of these throughout their career.
“Now, think about [a system] that looks at millions of images of screenings.” The ability to aggregate, compare and analyze such a massive amount of data requires this type of technology, which many in the field expect will be able to detect more anomalies far earlier than the most skilled and experienced physician.
Tying all this back to AI for image creation, Adcock screens Coca-Cola’s “Masterpiece” commercial, generated using a combination of ChatGPT, Midjourney and other similar tools. Set in a crowded art gallery, it is full of unusual effects, including characters from famous paintings interacting with the product.
The kinetic, elaborate spot, he says, obviously took a lot of time and money to develop, but he holds it up as a very strong example of what the technology is capable of and a suggestion of what it might be capable of in short order.
“Oppenheimer” and Technology’s Ethical Consequences
TL;DR
“Oppenheimer” offers lessons on the “unintended consequences” of technology, says director Christopher Nolan.
Emerging tech including quantum computing, robotics, blockchain, VR and AI are all black boxes with potentially catastrophic consequences if we don’t build in ethical guardrails.
Leaders need to understand that developing a digital ethical risk strategy is well within their capabilities and management should not shy away.
Beware of what we create might be the message of Oppenheimer, on the face of it a film about the invention of the atomic bomb, but one with obvious parallels to today.
Director Christopher Nolan might have had the nascent Cold War in mind when he began the project, but since then Russia’s invasion of Ukraine and the rise of AI have given his film added resonance.
“When I talk to the leading researchers in the field of AI right now, they literally refer to this right now as their Oppenheimer moment,” Nolan said during a panel discussion of physicists moderated by NBC News’s Chuck Todd. “They’re looking to his story to say, okay, what are the responsibilities for scientists developing new technologies that may have unintended consequences?
“I’m not saying that Oppenheimer’s story offers any easy answers to those questions, but at least can serve as a cautionary tale.”
Nolan explains that Oppenheimer is an attempt to understand what it must have been like for those few people in charge to have developed such extraordinary power and then to realize ultimately what they had done. The film does not pretend to offer any easy answers.
“I mean, the reality is, as a filmmaker, I don’t have to offer the answers,” he said. “I just get to ask the most interesting questions. But I do think there’s tremendous value in that if it can resonate with the audience.”
Asked by Todd what he hoped Silicon Valley might learn from the film, Nolan replied, “I think what I would want them to take away is the concept of accountability. When you innovate through technology, you have to make sure there is accountability.
“The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does.”
There has to be accountability, he emphasized. “We have to hold people accountable for what they do with the tools that they have.”
Nolan was making comparisons between nuclear Armageddon and AI’s potential for species extinction, but he is not alone in calling on big tech to place the needs of society above their own greed.
In an essay for Harvard Business Review, Reid Blackman asks how we can avoid the ethical nightmares of emerging tech including blockchain, robotics, gene editing and VR.
“While generative AI has our attention right now, other technologies coming down the pike promise to be just as disruptive. Augmented and virtual reality, and too many others have the potential to reshape the world for good or ill,” he writes.
Ethical nightmares include discriminating against tens of thousands of people, tricking people into giving up all their money, misrepresenting truth to distort democracy, and systematically violating people’s privacy. The environmental cost of the massive computing power required for data-driven tech is among countless other use-case-specific risks.
Blackman has some suggestions as to how to approach these dilemmas — but it is up to the tech firms that develop the technologies to address them.
“How do we develop, apply, and monitor them in ways that avoid worst-case scenarios? How do we design and deploy [tech] in a way that keeps people safe?”
It is not technologists, data scientists, engineers, coders or mathematicians who need to take heed, but the business leaders who are ultimately responsible for this work, he says.
“Leaders need to articulate their worst-case scenarios — their ethical nightmares — and explain how they will prevent them.”
Blackman examines a few emerging tech nightmares. Quantum computers, for example, “throw gasoline on a problem we see in machine learning: the problem of unexplainable, or black box, AI.
“Essentially, in many cases, we don’t know why an AI tool makes the predictions that it does. Quantum computing makes black box models truly impenetrable.”
Today, data scientists can offer explanations of an AI’s outputs that are simplified representations of what’s actually going on. But at some point, simplification becomes distortion. And because quantum computers can process trillions of data points, boiling that process down to an explanation we can understand — while retaining confidence that the explanation is more or less true — “becomes vanishingly difficult,” Blackman says.
“That leads to a litany of ethical questions: Under what conditions can we trust the outputs of a (quantum) black box model? What do we do if the system appears to be broken or is acting very strangely? Do we acquiesce to the inscrutable outputs of the machine that has proven reliable previously?”
What about an inscrutable or unaccountable blockchain? Having all of our data and money tracked on an immutable digital record is being advocated as a good thing. But what if it is not?
“Just like any other kind of management, the quality of a blockchain’s governance depends on answering a string of important questions. For example: What data belongs on the blockchain, and what doesn’t? Who decides what goes on? Who monitors? What’s the protocol if an error is found in the code of the blockchain? How are voting rights and power distributed?”
Bottom line: Bad governance in blockchain can lead to nightmare scenarios, like people losing their savings, having information about themselves disclosed against their wills, or false information loaded onto people’s asset pages that enables deception and fraud.
OK, we get the picture. Tech out of control is bad. We should be putting pressure on the leaders of the largest tech companies to answer some hard (ethical) questions, such as:
Is using a black box model acceptable?
Is the chatbot engaging in ethically unacceptable manipulation of users?
Is the governance of this blockchain fair, reasonable, and robust?
Is this AR content appropriate for the intended audience?
Is this our organization’s responsibility or is it the user’s or the government’s?
Might this erode confidence in democracy when used or abused at scale?
Is this inhumane?
Blackman insists: “These aren’t technical questions — they’re ethical, qualitative ones. They are exactly the kinds of problems that business leaders — guided by relevant subject matter experts — are charged with answering.”
It’s understandable that leaders might find this task daunting, but there’s no question that they’re the ones responsible, he argues. Most employees and consumers want organizations to have a digital ethical risk strategy.
“Leaders need to understand that developing a digital ethical risk strategy is well within their capabilities. Management should not shy away.”
But what or who is going to force them to do this? Boiling it down: do you trust Elon Musk or Mark Zuckerberg, Jeff Bezos or the less well-known chief execs at Microsoft, Google, OpenAI, Nvidia and Apple — let alone those developing similar tech in China or Russia — to do the right thing by us all?
AI isn’t likely to enslave humanity, but it could take over many aspects of our lives.
The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.
Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.
You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.
A less resource-intensive variation has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.
Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.
Actual Harm
In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.
Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes — from high-tech heists to ordinary scams.
AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.
These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.
Not In the Same League
The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.
Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.
AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.
Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.
What It Means to Be Human
Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.
For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
Not Dead But Diminished
So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.
The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”
The Medium, the Movie, the Message: IMAX for “Oppenheimer”
Watch: IMAX production on “Oppenheimer”
Director Christopher Nolan sought out a medium sufficiently powerful and immersive to present his historical epic Oppenheimer.
A devout believer that motion picture film is an essential component in creating a true cinematic experience, he chose to shoot the story of the scientist who oversaw the creation of nuclear weaponry in 65mm, 15-perf IMAX format. Learn more in the video above.
To clarify: That’s the same super large-gauge film stock that directors such as Quentin Tarantino (The Hateful Eight), Paul Thomas Anderson (The Master) and Stanley Kubrick (2001: A Space Odyssey) have used but with a crucial difference.
In conventional 5-perf 65mm, the film travels vertically through the camera: the width of the frame spans the film from perforation to perforation, and the height of the frame measures five perforations. The IMAX format instead runs the stock past the lens on its “side,” so the height of the frame spans the film from the perforations on the left to those on the right, and the width of the frame extends across 15 perforations.
Watch: The image capture for Oppenheimer
This allows for a truly enormous frame that can be projected significantly larger, and with much finer detail, than even the above-mentioned epics. It also consumes approximately three times as much film between “action” and “cut” as traditional 5-perf 65mm, which is in itself a prohibitively expensive medium for all but the most established filmmakers to even consider.
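To put rough numbers on that film consumption, here is a minimal sketch; the perforation pitch and the 24fps rate are commonly cited approximations rather than figures from the production.

```python
# Rough comparison of film usage: traditional 5-perf 65mm vs. 15-perf 65mm (IMAX).
# The perforation pitch is an approximate, commonly cited value for 65mm stock.
PERF_PITCH_MM = 4.74  # approximate distance between perforations

def film_per_second_mm(perfs_per_frame, fps=24):
    """Length of negative pulled through the camera each second."""
    return perfs_per_frame * PERF_PITCH_MM * fps

five_perf = film_per_second_mm(5)      # traditional 65mm: frame height spans 5 perfs
fifteen_perf = film_per_second_mm(15)  # IMAX: film runs sideways, frame width spans 15 perfs

print(f"5-perf 65mm:  {five_perf / 1000:.2f} m of film per second")
print(f"15-perf IMAX: {fifteen_perf / 1000:.2f} m of film per second")
print(f"Ratio: {fifteen_perf / five_perf:.1f}x")  # ~3x, in line with the figure above
```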
While most filmmakers, even diehard lovers of celluloid, would shoot the black-and-white portions of a movie in 2023 in color and create the monochrome version during the digital color grade, that approach simply wasn’t acceptable to Nolan.
The fact that no black-and-white negative stock was manufactured in the required 65mm format didn’t slow him down, either: he arranged for Eastman Kodak to manufacture its black-and-white negative stock in 65mm gauge just for this project.
Watch: The trailer for Oppenheimer
As reported by Rochester First, from Kodak’s hometown, “part of the movie was shot on custom-made film by Kodak: black and white large format IMAX film.”
“Nolan is one of our biggest customers,” says Commercialization Manager at Kodak, Diane Carroll Yacoby. “He engaged Kodak right from the beginning. He had several ideas he wanted to try, one of which was to capture some of the scenes in Oppenheimer in black and white large format negative, which unfortunately, we did not have available.”
The task of making the Double X negative in the larger gauge involved some serious revamping of Kodak’s manufacturing plant in order to cut the rolls of emulsion and perforate them with sprocket holes for a format in which Double X had never been sold.
“It’s so cool … bringing your friends and family [and saying] ‘We made this film, I remember when this was going through the whole process,’” enthuses Operations Manager of Film Finishing, Kristen Taglialatela in the article.
As viewers of IMAX specialty films know, the medium is a powerful way to present giant, sweeping landscapes, “but I got very curious to discover this as an intimate format,” cinematographer Hoyte Van Hoytema recalls.
When shooting in the format, he says, “The face is like a landscape. There’s a huge complexity and a huge depth to it.”
The history of the title character’s role in transforming nuclear warfare from a possibility to reality, says Nolan, “is one of the biggest stories imaginable.
“Our film tries to take you into his experience. And IMAX, for me is a portal into a level of immersion that you can’t get from any other format.”
van Hoytema told Collider that Nolan is very much dedicated to shooting scenes with a single camera, the old-school way.
Of course, the expense of running multiple 15-perf 65mm cameras for every take would likely be prohibitive even with the type of budgets he gets for his movies, but it’s about more than that.
The cinematographer elaborates that the camera is like the “magic box” on set that everything is directed towards. All the action “is sucked into that one little box, so that one camera really becomes an epicenter of our shoots. As soon as you put two cameras on the set, that attention gets somehow divided.”
Additionally, the director is not one for hanging around the video village; Nolan is almost always found directly to the side of the camera during a take. “The actors know exactly towards where they’re working,” the DP says, adding that this is also true for “the production designers, the set dressers and us in lighting… it all has to [aim] towards that one direction.”
The IMAX cameras, in addition to their size, present some challenges for shooting, especially when capturing the type of close, intimate compositions that make up the majority of the film. van Hoytema explains that each frame of 15-perf, 65mm stock “is a huge piece of exposed film. So, 24 of those frames … pulled through the camera per second! You can imagine how much power and inertia and how big the motors are that you need in order to do that, and how aggressive your mechanism has to be to… stop that frame” to achieve the intermittent motion necessary for a film camera to work.
This is why “that camera is so loud, and it’s so bulky. It’s just physically very heavy.” And while there is a blimp available that will dampen the sound (likened to that of a lawnmower), the device itself is a rather enormous apparatus, which van Hoytema refers to as the “coffin.”
Another factor comes from the unique attributes of the optics required. Lenses designed to cover such a large image area will, as a rule of physics, need longer focal lengths to capture the same framing as a lens designed for a traditional-sized frame. For example, if a filmmaker frames a head-and-shoulders shot of an actor with the camera six feet away on a 35mm 1.85:1 frame, they will need a much longer lens to get the same head-and-shoulders framing on 15-perf 65mm with the camera at the same distance from the subject.
All else being equal, longer focal lengths yield shallower depth of field, and longer lenses are trickier to design for very close focus than wider ones.
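To illustrate that relationship, here is a minimal sketch; the frame widths are commonly cited approximations, and the 35mm lens choice is purely hypothetical.

```python
import math

# For the same framing from the same camera position, the required focal length
# scales (to a first approximation) with the width of the capture format.
SUPER35_WIDTH_MM = 24.9       # typical Super 35 capture width (approximate)
IMAX_15PERF_WIDTH_MM = 70.4   # approximate 15-perf 65mm frame width

def equivalent_focal_length(focal_mm, from_width_mm, to_width_mm):
    """Focal length giving the same horizontal framing on a different format."""
    return focal_mm * (to_width_mm / from_width_mm)

def horizontal_aov_deg(width_mm, focal_mm):
    """Horizontal angle of view for a given format width and focal length."""
    return math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))

f_s35 = 35.0  # a hypothetical lens choice on Super 35
f_imax = equivalent_focal_length(f_s35, SUPER35_WIDTH_MM, IMAX_15PERF_WIDTH_MM)

print(f"A {f_s35:.0f}mm lens on Super 35 frames roughly like a {f_imax:.0f}mm lens on 15-perf IMAX")
print(f"Angles of view: {horizontal_aov_deg(SUPER35_WIDTH_MM, f_s35):.1f} deg "
      f"vs. {horizontal_aov_deg(IMAX_15PERF_WIDTH_MM, f_imax):.1f} deg")
```

The roughly threefold-longer lens, at the same stop and subject distance, is also what makes the depth of field so much shallower and close focus so much harder, which is exactly the problem described next.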
As a piece in Vulture notes, “shooting close-ups in IMAX was harder, technically, than capturing the vistas of Los Alamos, N.M., where Oppenheimer oversaw the creation of the atomic bomb as part of the Manhattan Project.” van Hoytema usually relied on tight 80mm lenses for close-ups, but he needed to get closer than six feet for greater intimacy. With no available lenses up to the task, “Panavision lens specialist Dan Sasaki supplied and adapted [medium format still] Hasselblad, Panavision Sphero 65, and Panavision System 65 glass specifically for the purpose.”
Nolan wanted to avoid CGI for the film, despite the fact that some of its most powerful moments involve abstractions, such as imagery of molecules and the incendiary moments of atomic bomb testing, particularly the climactic Trinity Test, which the title character and his fellow scientists set off to prove that their atomic bomb would *spoiler alert* both work as expected and not set the earth’s atmosphere ablaze. It’s obviously a key scene in the movie, and Nolan wanted to put it together with practically acquired imagery.
“Pushing the Button:” Recreating the Trinity Test for “Oppenheimer”
An interview in American Cinematographer covers quite a bit about the specialized glass Panavision made for Oppenheimer, and takes a particularly deep dive into a pair of lenses that had to be specially made to capture the practical effects that would make up elements of that bomb test sequence. The fascinating interview with Dan Sasaki, Panavision’s vice president of Optical Engineering and Lens Strategy, goes into the creation of these custom optics.
“Initially,” Sasaki recalls, van Hoytema “really couldn’t say what he was trying to do, because any project with Chris Nolan is generally very top-secret. All he said was that he wanted a waterproof probe lens that would focus very close, to cover whatever format we could. Then we said, ‘Well, what camera?’ And he said, ‘Can you make it work for IMAX?’ We told him it was going to be a challenge, but then I remembered we built large-format probes for the airplane cockpits in [Nolan’s feature] Dunkirk. Then there was talk about photographing particles with a probe submerged in water.”
As the discussion evolved, Sasaki recalls the cinematographer doling out a bit more info and a Panavision optical expert realizing, “‘Oh, you want a microscope.’ And he goes, ‘Yeah, a wide-angle microscope for IMAX.’ We came up with a proof of concept for [both the 5-perf] 65mm and IMAX cameras.”
‟It had all to do with the fact that we wanted to see physics, we wanted to be within the world of atoms and, of course, we couldn’t build lenses the size of atoms!” van Hoytema explained to British Cinematographer.
They then set out to design both a 24mm and a 35mm version of this microscope lens, tackling the 35mm first as they knew the 24mm would be more difficult. After more testing, Sasaki recalls, “Hoyte asked us for closer focus and to make sure it [could] go at least nine inches below the water’s surface. Our next step was to make it waterproof and set the lens stops.
“Initially, they were testing the probe with a waterproof membrane in the side of the tank that limited the diameter of the relay, but because the depth of field was so shallow, he was working at deeper stops, which accommodated smaller glass elements and shrunk the size of the probe.'”
As with any project of such complexity, the initial prototype had some issues related to waterproofing, among other things. Sasaki’s team also realized that the calculations they’d made for the extremely fine close-focus abilities (one of the lenses needed to focus down to roughly 1mm!) were off, because the density of water itself had a small but perceptible effect on the ability to calculate precise focus. To address that, they built “an intermediate optical surface, similar to a snorkel, to separate the lens itself from the water.”
“Hoyte is one of those people you’ll do anything for because every project he touches is amazing,” Sasaki enthuses. “He’s also technically astute — he has his own machine shop, and he builds his own parts, so he’s very understanding of the process and gives us the lead time to do things right. He’s hands-on, so it’s not, ‘here’s what I want’ and then comes back six months later. He gets involved.”
“We created science experiments,” the DP told IndieWire. “We built aquariums with power in it. We dropped silver particles in it. We had molded metallic balloons which were lit up from the inside. We had things slamming and smashing into one another such as ping-pong balls, or just had objects spinning.
“We had long shutter speeds, short shutter speeds… negative overexposure, underexposure. It was like a giant playground for all of us,” the D.P. recalls of these practical effects shots.
van Hoytema’s enthusiasm for this format is evident in his recent interview with British Cinematographer: “IMAX is constantly innovating, and they’re constantly helping us solve problems or make those cameras better. I always compare that camera to, in a way, a Formula One car. It needs a lot of care and a lot of love. But anything that gives images like that, you wouldn’t expect less. It’s not an off-the-shelf thing that just shoots school pictures. In order to get to that very specific level, it just needs service and a very meticulous guidance.”
“I’ve seen Oppenheimer twice, in digital and 70mm IMAX,” the Vulture writer adds. “Both times, as it turns out, in the exact same auditorium. So, while I can’t speak to the full range of formats in which the movie is being exhibited—standard and 4K projection, laser and xenon IMAX, not to mention 35 mm and non-IMAX 70 mm film—I can say precisely how much of a difference seeing it in this most rarefied of formats brings to the process, and how much is just hype.
“Oppenheimer is going to look spectacular in any of them…But where most IMAX movies are solely interested in exploiting its capacity for spectacle—making the big things bigger and the loud things louder—Oppenheimer is up to something different. While the Trinity test does fill the theater floor to ceiling with a cloud of nuclear fire, the movie is largely driven by conversations.
“By blowing those conversations up so much larger than life, to the point where Cillian Murphy’s eyes are not just the color but the size of swimming pools, Nolan underlines how seemingly mundane or undramatic events, the kind movies often don’t even bother with, can have absolutely massive consequences.
“The showstopper, so to speak, is the Trinity explosion itself, but the movie is dotted with arresting images all along, some of which remain abstract or ambiguous until the closing moments: the ripples of raindrops in a pond evolve into the blast radii of nuclear detonations covering the globe; an ethereal vision of clouds that we later realize are the ghostly trails of missile launches. And that’s where the added power of the 70 mm IMAX format, which offers more than four times the resolution of the best commercial digital projection, really comes in.”
“Oppenheimer” editor Jennifer Lame, ACE speaks with “The Rough Cut” host Matt Feury about working with writer-director Christopher Nolan.
July 12, 2023
Virtual Production: Expectations vs. Reality
Image courtesy of Stargate Studios
TL;DR
Sam Nicholson, founder and CEO of Stargate Studios, was interviewed at the 2023 NAB Show by Erik Weaver, director of ETC’s Adaptive Production project.
Nicholson says virtual production is superior to green screen but location work can still be better than a stage
We have a lot to learn about how to light in a volume, he says. Nicholson also warns against directors changing their minds about a volume shot after the fact; better, he advises, to plan ahead of time and avoid costly reshoots or post work.
We’ve only just scratched the surface of how to use light from virtual production displays, says Sam Nicholson, founder and CEO of Stargate Studios.
Lighting the scene using LED screens is a big topic of discussion in the unions right now, he says, because it is a technology that is neither purely a light nor purely a screen: it isn’t designed as a lighting fixture, yet scenes can be illuminated with it.
“I’m not going to take sides in that argument. But that’s something that we’re dealing with both playback and lighting right now, and trying to define them. It’s a new technology that’s right in between, nobody knows what to do with it.”
Nicholson is one of Hollywood’s leading virtual production and visual effects creatives. With nearly forty years as a visual effects supervisor, DP, director, and now virtual production supervisor, Nicholson and his company, Stargate Studios, have combined the latest in LED technology with their proprietary “ThruView” lighting system.
Interviewed at the 2023 NAB Show by Erik Weaver, director of ETC’s Adaptive Production project, Nicholson explained that VP is the process of capturing the real world and making it usable in such a way that it’ll play back in a volume.
“Virtual production is fabulous. [But] if you can afford to go to Rome to shoot live action, and get some great pasta, and have a great time, you know, go to Rome. If you can’t, then send a small team to capture Rome, the Vatican, and bring the data back [to your volume], and now you can control the situation.”
In his presentation, Nicholson talked about his journey with creating in-camera VFX, including working with the legendary Doug Trumbull on Star Trek: The Motion Picture. He described creating the effect of a 60-foot-high column of light shot in-camera and in real time on stage for director Robert Wise.
He worked through the green screen period, which he described as one where “basically the crew would get a lobotomy going in. Nobody knows what it’s going to look like. How do you light it? How do you light for daylight if you can’t see daylight behind me? Where’s the sun? Green screen was very messy, because all the actors didn’t know which way to look. Really bad for the actors and very difficult for a director of photography, and very frustrating for the director.”
He later worked on the groundbreaking TV series 24, elements of which were shot on green screen.
“It was kind of a game changer, because all of a sudden, they said it’s a lot cheaper to bring the location to the actors than taking the actors to the location. It’s very difficult to shoot in Washington, D.C. when you’ve got to get a permit from [multiple] groups to shoot anywhere.
“With green screen the actors didn’t have to be out all night. But dammit, we hate being on green screen. I mean, we had Kiefer Sutherland, like, walk off the set because he hated shooting on green.”
Virtual production solves a lot of these problems, giving actors a cue as to the environment they are in. But it is still very early days in the technology’s development.
Covering dialogue scenes with a shallow depth of field on a longer lens is ideal, since the LED background falls softly out of focus, but the wider the lens, the more difficult it gets.
In addition, and perhaps the biggest challenge, virtual production isn’t as flexible as you might be led to believe. If a director changes their mind about a shot after the fact, it remains difficult to fix that shot in post because the backgrounds are baked in.
“If you have a director who doesn’t know what they want or you’re on a short prep schedule, don’t try to do virtual production because you’ll get burned. Be really aware that you don’t have an alpha channel. There is no matte. So you’re gonna wind up with a big old rotoscoping bill if you change your mind.”
His advice? Prep, prep and prep. “Virtual production is not a panacea. It’s a great new tool that does certain things like reflective objects really well. But it does other things horribly, like changing your mind.”
Film and TV productions are increasingly using LED walls for virtual production, leading to an industry-wide demand for understanding the best LED panels available.
Two industry experts, color scientist Tucker Downs and virtual production display specialist Ritchie Argue, undertook an eight-month project to rate and analyze 12 panels from eight manufacturers. They assessed each panel’s ability to display test materials in terms of contrast and color rendition throughout their entire brightness scale.
Downs and Argue didn’t reveal the make or models of the units they tested, but they did provide a detailed explanation of their methods and testing points. They emphasized that their examination provides a far more accurate prediction of a panel’s effectiveness on a real set than any single numerical value.
The panel evaluation process was specifically designed for on-camera use. Panels that look good to the naked eye may not perform well on camera, so the experts aimed to determine whether a display performs linearly and whether the colors are going to match from one shot to the next.
As film and TV productions expand their use of LED walls for virtual production, more industry pros naturally are seeking out the “best” LED panels available. Mega-budget productions will undoubtedly obtain the best there is, but those with just enough budget to get into VP are very eager to understand the trade-offs among the many options currently available in order to make an informed decision.
This is a more complex process than it might initially seem to be. There are certain criteria for these panels that can be expressed by a simple number: pixel pitch, which determines the resolution a panel of a given size can display, or CRI (Color Rendering Index), which quantifies the accuracy and purity of the color and color temperature a particular panel emits overall.
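As a simple illustration of what pixel pitch means in practice, the sketch below converts a pitch value into wall resolution; the panel pitch and wall dimensions are hypothetical, chosen only to show the arithmetic.

```python
# Pixel pitch (the distance between LED centers) translates directly into how
# many pixels a wall of a given size can display. All values here are hypothetical.
pitch_mm = 2.3                     # a common pitch class for on-camera LED panels
wall_width_m, wall_height_m = 18.0, 6.0

px_wide = int(wall_width_m * 1000 / pitch_mm)
px_high = int(wall_height_m * 1000 / pitch_mm)

print(f"{wall_width_m} m x {wall_height_m} m wall at {pitch_mm} mm pitch: "
      f"{px_wide} x {px_high} pixels (~{px_wide * px_high / 1e6:.1f} MP)")
```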
But when people use an LED panel in front of the camera with the objective of blending the image it’s displaying seamlessly into a live-action shot, the tolerances must be tighter; the panel must be capable of displaying imagery as close to perfectly rendered as possible, and measuring all the relevant factors is a complex job requiring time, a full complement of perfectly calibrated tools, and a very high level of expertise.
Fortunately for us, color scientist Tucker Downs and virtual production display specialist Ritchie Argue brought all of the above to their eight-month project, taking a highly specialized, in-depth approach to rating 12 panels from eight manufacturers. They measured each panel’s ability to display the same test materials in terms of contrast and color rendition throughout the entire brightness scale, from absolute black to absolute white and at an extremely large number of points in between. They presented their methodology and results in a session at the 2023 NAB Show entitled “Advanced Color Analysis Methods for LED Walls.” Watch the full session in the video at the top of the page.
Spoiler alert: While Downs and Argue rate 12 panels, they don’t reveal the make or models of the units they tested, so there’s no table of best-to-worst in their presentation. But they do offer an in-depth explanation of each and every data point they put to the test, their methods, and the reasons why their extensive examination is far more predictive of how effective a panel will be on a real set than any single number can provide.
“We’re not the police of panels or recommending a given display,” says Argue. “What we are recommending is specific target values… to achieve for use in this space. And we’re also not looking to advise vendors on how to resolve implementation issues or improve their panels. That’s their secret sauce. We just want to see how to evaluate their secret sauce and make sure it’s doing what we’re expecting.”
This entails measuring performance specifically for on-camera use. “This is not a traditional use for LED,” Argue adds. “We really wanted to dig in and see what’s necessary to get the most performance out of them. We determined some key characteristics or metrics for how to think about LED in on-camera.”
Downs notes that panels that look good to the naked eye can yield inferior results, while some that might not look as good to someone standing on a set do read better on-camera. The goal was to “drill down numerically, objectively, and figure out if a particular display performs linearly… and are the colors going match from one shot to the next shot, to next week, to the next production?”
Their methodology involved a test pattern generator producing 10-bit test patterns, which they sent through the full video pipeline before measuring the outgoing color, investigating perceptible color differences across three attributes: hue, chroma, and lightness.
The two experts consistently point out that the numbers generally applied to LED panels’ color fidelity simply aren’t specific enough to convey suitability for such specialized applications as VP. You can look at a gamut-coverage figure that says a panel covers, say, 95% of the P3 color space, says Downs, “but there can be two tiles with the same coverage of P3, but one is more consistent than the other.”
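The session doesn’t spell out the exact formulas behind those measurements, but a minimal sketch of comparing a target color with what a panel actually displayed, attribute by attribute in a lightness/chroma/hue space (here, CIELAB converted to LCh), might look like this; the sample values are hypothetical.

```python
import math

def lab_to_lch(L, a, b):
    """Convert CIELAB to LCh: lightness, chroma, and hue angle in degrees."""
    C = math.hypot(a, b)
    h = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, h

def attribute_deltas(lab_target, lab_measured):
    """Per-attribute differences between the intended color and the displayed color."""
    L1, C1, h1 = lab_to_lch(*lab_target)
    L2, C2, h2 = lab_to_lch(*lab_measured)
    dh = (h2 - h1 + 180.0) % 360.0 - 180.0  # wrap hue difference into [-180, 180]
    return {"dL (lightness)": L2 - L1, "dC (chroma)": C2 - C1, "dh (hue, degrees)": dh}

# Hypothetical target vs. measured values, just to show the shape of the comparison.
target = (50.0, 20.0, 10.0)
measured = (48.5, 23.0, 7.5)
print(attribute_deltas(target, measured))
```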
Collaborative Competition: The State of the Industry for Virtual Production
TL;DR
A group of industry leaders gathered to present the “Gaijin Collective State of the Industry Report” focused on the evolution and growth of virtual production.
The panel highlighted the shift towards openness, collaboration, and the sharing of information between competitors as key to fostering the VP ecosystem.
Because the industry is still young — and highly specialized — educating clients about VP outweighs aggressive competition, the panelists agreed.
Balancing commercial work alongside educational partnerships will help ensure the continual development and adoption of VP technology.
Four prominent players in the world of virtual production shared their insights into the rapidly growing field at the 2023 NAB show. The “Gaijin Collective State of the Industry Report” included A.J. Wedding, co-founder of Orbital Virtual Studios; Tim Moore, CEO of Vū Technologies; J.T. Rooney, President of XR Stages; and Erik Weaver, director of adaptive production at the ETC (Entertainment Technology Center) at the University of Southern California. The panelists discussed the OSVP industry, opportunities for growth, and the importance of sharing information among competitors in order for the ecosystem as a whole to thrive.
The participants agree that VP has changed since the LED-wall based systems came on the scene. “People were more closed off,” recalls Rooney, whose work has primarily focused on live entertainment. “It is now growing into a real industry, and with that comes relationships and friendships. There’s a real difference between popping up [an LED] wall and making something work day in and day out.”
Wedding, who has been involved in the film, post and VFX industries for 20 years, notes that openness is not the antithesis of competition; in fact, he says, it can work in everyone’s favor. There is so much for most people to learn about what VP actually is that educating prospective clients about the process overall can be far more important than aggressively competing.
Wedding breaks down how, even among the speakers’ businesses, there are unique specialties. “They can be specialized as more designed for cinematic work or broadcast or XR (extended reality) and AR (augmented reality), which is different from projects based around a live theater performance,” he said.
“We’re both in Los Angeles,” Wedding, whose company did work on the FX series Snowfall, adds, “but whenever somebody comes to me and wants to do XR, I send them to [Rooney’s XR Stages]. I’m not going to mess with that! It’s a whole separate thing, even though we all have LED walls and tracking solutions.”
Rooney points out that VP is still such a young technology that there’s no agreement yet about standard terminology. “If you talk about virtual production, you’re using one name to describe a workflow across something like 10 different industries,” he says, noting that even what we refer to simply as “production” actually comprises many different things. “Does that mean a single-camera shoot from inside someone’s basement in Cleveland or what happens on a Hollywood set? It’s all ‘production,’ but there are a million ways to talk about it.”
Another sub-specialty of VP, what Orbital refers to as “digital prosthetics,” involves what Wedding defines as a “live deepfake.” “It’s somewhat controversial but also very helpful in certain circumstances. If you’re going to do alien makeup, which would require that an actor sit in a chair for nine hours, [with digital prosthetics] the actor can just walk on stage and perform.”
“You have to learn on the job in real time,” Weaver chimes in. Further, it’s hard to formalize workflows and nomenclature, when, as Rooney says, “every single vendor and every single piece of equipment we have in the studio is either a beta or an alpha version of something — firmware, software.”
Weaver suggested stages target 80 days a year of commercial work and give away 20-plus days a year to educational partnerships, especially with universities.
The dissemination of useful information, whether it comes from the people who run the VP studios or through education partnerships, is according to Wedding, “part of keeping the industry going. If people stop using virtual production because they had a bad experience, then we’re all screwed.”
It’ll Be a New (New) Media Experience in the MSG Sphere
TL;DR
The Sphere in Las Vegas is an experiential medium featuring an LED display, sound system and 4D technologies that require a completely new approach to filmmaking.
U2 is headlining the Sphere’s opening nights and Darren Aronofsky has made the first film in the patented Big Sky format.
Will the custom nature of the technology prove more of a straitjacket than a source of freedom for creatives?
More than just another giant screen or an upgrade to 4D cinema, the latest Las Vegas attraction is being touted as a new experiential entertainment format.
“We are redefining the future of entertainment through Sphere,” MSG Entertainment executive chairman and CEO James L. Dolan says. “Sphere provides a new medium for directors, artists, and brands to create multi-sensory storytelling experiences that cannot be seen or told anywhere else.”
“This will be a quantum leap forward in the sense of what a concert can be,” U2’s The Edge told Andy Greene at Rolling Stone. “Because the screen is so high-res and so immersive, we can actually change your perception of the shape of the venue. It’s a new genre of immersive experience, and a new art form.”
U2 will be the opening act for the Sphere on September 29, the first of a largely sold out 25-date residency running through the end of the year.
The 366-foot-tall, 516-foot-wide dome is the culmination of seven years of work, with a budget that reportedly stretched to $2.3 billion, and its developers are aiming to reinvent every aspect of the live event experience.
Virtual reality without the goggles was the elevator pitch, MSG Ventures CEO David Dibble recalls to Rolling Stone.
“We thought, ‘Wouldn’t it be great to have VR experiences without those damn goggles?’ That’s what the Sphere is,” says Dibble.
It had to have the world’s highest resolution screen, and so it does at 16K by 16K. There is no commercial camera capable of recording at that resolution without having to stitch together images from a camera array. So MSG Entertainment built its own camera system and a whole postproduction workflow, which together comprise a system it calls Big Sky.
The Big Sky single-lens camera boasts a 316-megapixel sensor capable of a roughly 40x resolution increase over 4K cameras. It can capture content at up to 120 frames per second in its 18K square format, and at even higher frame rates at lower resolutions.
They designed a custom media recorder to capture all that data including uncompressed RAW footage at 30 Gigabytes per second with each media magazine containing 32 terabytes and holding approximately 17 minutes of footage.
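Taking the published figures at face value, the record-time claim checks out with some back-of-the-envelope arithmetic (the 4K comparison frame below is a standard DCI 4K raster, not a figure from Sphere Studios):

```python
# Back-of-the-envelope check of the published Big Sky figures.
data_rate_gb_per_s = 30   # stated sustained RAW data rate, GB per second
magazine_tb = 32          # stated media magazine capacity, TB

seconds_per_magazine = magazine_tb * 1000 / data_rate_gb_per_s
print(f"One magazine lasts roughly {seconds_per_magazine / 60:.1f} minutes")  # ~17.8 minutes

sensor_mp = 316
dci_4k_mp = 4096 * 2160 / 1e6  # ~8.8 MP
print(f"Roughly {sensor_mp / dci_4k_mp:.0f}x the pixel count of a DCI 4K frame")  # ~36x
```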
According to David Crewe at Petapixel, who saw the tech first hand, “since the entire system was built in-house, the team at Sphere Studios had to build their own image processing software specifically for Big Sky that utilizes GPU-accelerated RAW processing to make the workflows of capturing and delivering the content to the Sphere screen practical and efficient. Through the use of proxy editing, a standard laptop can be used, connected to the custom media decks to view and edit the footage with practically zero lag.”
Specialist lenses have been built, too, including a 150-degree field of view, which is true to the view of the sphere where the content will be projected, and a 165-degree field of view which is designed for “overshoot and stabilization” and is particularly useful in filming situations where the camera is in rapid motion or in a helicopter.
The venue also features a 164,000-speaker audio system that can isolate specific sounds, or even limit them to certain parts of the audience. The system was designed by German start-up Holoplot following MSG’s investment in the company.
According to Rolling Stone, the patented audio technology they created allows them to beam waves of sound wherever they want within the venue in stunningly precise fashion. This would allow, for example, one section of an audience to hear a movie in Spanish, and another side to hear it in English, without any bleed-through whatsoever, almost like fans are wearing headphones. “It can also isolate instruments,” says Dibble. “You can have acoustics in one area, and percussion in another.”
The venue can seat 17,600 people, and 10,000 of them will be in specially designed chairs with built-in haptics and variable amplitudes: Each seat is essentially a low-frequency speaker. There’s also the option to shoot cold air, hot air, wind, and even aromas into the faces of fans.
“There’s a noise-dampening system that we used in the nozzles of our air-delivery system that NASA found really interesting,” Dibble tells Rolling Stone. “They were like, ‘Do you mind if we adapted that for the space program?’ We went, ‘No, knock yourself out.’”
Director Darren Aronofsky (The Fountain, The Whale) was commissioned to shoot Postcard From Earth, the first piece of cinematic content for the Sphere with the Big Sky camera wielded by Oscar-nominated cinematographer Matthew Libatique.
“At its best, cinema is an immersive medium that transports the audience out of their regular life,” Aronofsky told Carolyn Giardina at The Hollywood Reporter. “The Sphere is an attempt to dial up that immersion.
He added, “Like anything, there are some things that Sphere works particularly well with and others that present new problems to solve. As different artists play with it, I’m sure they’ll find innovative ways to use it and affect audiences in different ways.”
He adds, “We just recently figured out how to shoot with macro lenses and we filmed a praying mantis resting on a branch. Imagine what that may feel like when we present it 20 stories high.”
The venue could house events like Mixed Martial Arts and will also be a centerpiece of the Formula One grand prix in October. MSG has announced plans to build similar venues in London and elsewhere.
It is too early to say but perhaps the highly bespoke nature of the venue and the workflow required to produce “experiences” for it may work against it. Will the technology prove more restrictive than flexible?
The Edge made this comment to Rolling Stone: “Unfortunately, because of the amount of time and expense in creating some of these set pieces visually, it’s quite hard to be as quick on our feet and spontaneous as we might have been on other tours.
“But we still are determined that there will be sections of the show that will be open to spontaneity.”
“Kagami” is a mixed-reality presentation showcasing a performance by the late composer Ryuichi Sakamoto, created by the studio Tin Drum and directed by Todd Eckert.
The Japanese Academy Award-winning composer for “The Last Emperor” died earlier this year, but has been resurrected as a hologram via mixed reality goggles for limited theatrical runs.
Wired writer Elissaveta Brandon judged the result “an experience that feels material and ethereal at once.”
The latest art-meets-tech experience presents the late Ryuichi Sakamoto in a digital concert.
The Japanese Academy Award-winning composer for The Last Emperor died in March at the age of 71, but he has been resurrected as a hologram for a special theatrical experience viewable in situ at New York’s The Shed using mixed reality goggles.
Called Kagami, which translates to “mirror” in Japanese, the production was a five-year work in progress with Sakamoto and is directed by Todd Eckert, a former content executive at Magic Leap.
Following its debut at The Shed, the show went on tour with scheduled stops at the Big Ears Festival in Knoxville, Tennessee and the Manchester International Festival in the UK, continuing in 2024 to the Sydney Opera House.
Sakamoto’s performance of 10 solo piano pieces was recorded in a volumetric capture system, but the artist died before the project was completed at mixed-reality studio Tin Drum.
According to Wired, it took about six months for Eckert’s team to process the raw data captured in the session. In the meantime, they also had to sculpt Sakamoto’s hair from scratch and recreate his iconic glasses. But the most challenging element to recreate was Sakamoto’s face, which was blocked from the cameras for a large part of his performance because he was hunched over. The team had to make up for the missing data by reconstructing his face, using segments of complete data as reference.
What of the 45-minute show itself? The show begins with 80 people sitting in a circle around absolutely nothing. After each concertgoer has slid on a Magic Leap 2 headset, a virtual Sakamoto appears in the center of the circle. The musician then performs while guests can move “around” him.
Forbes writer Charlie Fink attended and reports that when Sakamoto’s hologram ends a piece, there is silence. “One person claps before they realize this is not a live performance.”
In one piece, a tree grows out of the piano and its roots spread far below the floor, Fink describes. It dissolves into stars and the Milky Way, and soon we find ourselves standing above the earth as seen from space. In another composition we’re surrounded by iconic New York images: skylines, bridges, unexpected terracotta lions. At other times, Eckert shows the viewer black-and-white warplanes flying in formation and other iconic World War imagery.
Fink says he shed a tear, and Wired’s Elissaveta Brandon judged the result “an experience that feels material and ethereal at once.”
Brandon found the headset’s 70-degree field of view limiting and also tricky to get used to, “considering you’re walking around in a room full of strangers wearing the equivalent of dark sunglasses.”
She ponders whether a mixed-reality concert can ever make up for the sad reality of someone’s absence. What do we lose when technology becomes so integral to the most intimate of human experiences — and what do we gain?
“Replicating the unique energy and atmosphere that stems from the collective presence of performers and the audience being in the venue together is another challenge,” she writes.
“This is only partly addressed in Kagami, which may be designed for a small crowd to take in together, but remains a mostly solitary experience.”
Your Eyes vs. Frame Rates: What You Can (and Can’t) See
TL;DR
Frame rates have become a hotly contested issue in recent years, with strong arguments challenging the nearly century-old standard of 24fps for movies or up to 60fps for TV.
Scientist and filmmaker John Hess analyzes these claims, delving into some serious concepts related to human vision from a biological and perceptual perspective.
Hess supports the 24fps standard for narrative filmmaking because it is the look that has become part of the traditional culture of cinematic storytelling. He also finds many of the supposedly scientific arguments for much faster refresh rates flawed.
In recent years, frame rates for movies and television have become a hotly contested issue. Strong arguments from many quarters challenge the nearly century-old standard of 24fps for movies and up to 60fps for TV, suggesting that these represent anachronistic restrictions compared with the much faster frame rates (or refresh rates) possible in top-tier videogames running on powerful hardware.
Some propose that higher frame rates would look aesthetically more pleasing, others say that the look of material presented at higher frame rates looks more like “real life,” absent the artifacts of traditional refresh rates such as the motion blur and juddering effects that we’re used to seeing on screens but don’t perceive in life.
While there is certainly a large contingent that feels the traditional speeds, particularly 24fps, offer the aesthetic look of what the world has come to think of as “cinematic,” and that faster frame rates suggest sports coverage, a “soap opera effect” or a videogame look, there is also a fierce group arguing that those things are only negatives to people with a sentimental attachment to images derived from the technical limitations of the past.
An earlier video of his on the topic was apparently picked up by YouTube’s algorithm and brought to the attention of the large community with strong opinions on frame rates; Hess notes that it received far more views (more than half a million) than he’s used to and attracted far more comments (9,000 and counting) than his videos normally do.
Since then, he has made two videos in which he seriously analyzes the scientific claims he’s seen in those comments and elsewhere, delving into some serious concepts related to human vision from a biological and perceptual perspective.
With “What is the Frame Rate of the Human Eye?” and “What Frame Rate is Needed to Simulate Reality?” he unpacks what it really means to suggest that we see in frames, and how that compares to the way a camera works. He references scientific work suggesting that, depending on an array of factors, an argument could be made that we “see” at roughly 10fps, while another equally persuasive set of facts suggests that for a motion picture to present movement completely devoid of the kinds of artifacts we perceive in a normal cinematic presentation, it would need roughly 20,000fps to get there.
“There are a lot of kids saying how they can spot some super high ridiculous framerate,” he notes. “But the truth is the human eye doesn’t work that way. Humans are analog creatures. We don’t sample or quantize or pixelate our vision.”
The idea that someone claims to see at 144 frames-per-second or even higher, he says, is about “the fetishization of technology. The eye and the connected human visual system have no real frame rate, at least not the way we understand in terms of video and motion picture film.”
He sets out to debunk what he says are some unfortunately popular myths: “Myth number one: Fighter pilots are able to identify enemy aircraft when exposed to only an image flashed at 1/220th of a second so therefore the eye can see at least 220fps.” Hess says this idea has been floating around internet forums on the subject for so long that he’s unable to trace if it comes from any real study and, if it did, exactly what the number is.
He readily acknowledges that fighter pilots, by selection and training, have exquisite vision and well-honed reflexes, but, he says, “Just because you can see a flash at a short duration, that doesn’t mean your eye has the ‘frame rate’ equal to the speed of the flash. Using a camera recording 24 frames-per-second, we can detect and record an 800th-of-a-second flash. In fact, I can detect a flash using a camera running at any frame rate, so long as the shutter is open during the flash… So long as the photons are registered by the camera sensor when the shutter is open, the sampling rate or the frame rate doesn’t matter.
“Being able to see a flash does not in any way determine the frame rate of the recording medium or the frame rate of the human eye.”
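A tiny sketch of the point Hess is making, assuming a conventional 180-degree shutter (the specific numbers are illustrative only): a flash is recorded whenever it overlaps any frame’s open-shutter interval, no matter how brief the flash or how low the frame rate.

```python
# A flash registers on film or a sensor if it overlaps the open-shutter window of
# any frame; the frame rate sets no lower limit on how brief a visible flash can be.
def flash_is_recorded(flash_start, flash_len, fps=24, shutter_angle=180.0):
    frame_period = 1.0 / fps
    open_time = frame_period * (shutter_angle / 360.0)  # shutter assumed open at the start of each frame
    flash_end = flash_start + flash_len
    first = int(flash_start // frame_period)
    last = int(flash_end // frame_period)
    for i in range(first, last + 1):
        open_start = i * frame_period
        open_end = open_start + open_time
        if flash_start < open_end and flash_end > open_start:
            return True
    return False

# A 1/800-second flash against a 24fps, 180-degree shutter:
print(flash_is_recorded(flash_start=0.010, flash_len=1 / 800))  # True: lands inside the open window
print(flash_is_recorded(flash_start=0.030, flash_len=1 / 800))  # False: falls in the closed half of that frame
```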
“Myth #2: I can tell the difference between a 60 hertz display and 144 hertz display, therefore the eye is at least capable of seeing 144fps. But that says more about the way motion pictures are created on a screen than it does about human biology.”
He points out that the way people generally first notice the refresh rate of a monitor is by wiggling their mouse around. “But here’s the key point: when you move your cursor around the screen, nothing actually moves.” The motion of a screen is instead a rapid succession of still images, each a little different than the previous one. This is called apparent motion by psychologists.
When you wiggle a cursor around the screen very quickly, the images of the mouse create a phantom array of cursors, with 60Hz displaying 60 instances of the cursor generated every second, and 144Hz displaying 144 instances of the cursor created every second. You can absolutely tell the difference between 60 and 144 cursors-per-second. But before you congratulate yourself for such an observational feat, he says, “so can a camera shooting 24 frames-per-second.”
In short, he sums up: “The human eye does not have a framerate.” But there are some important numbers he wants to talk about.
The first one is 10Hz, or 10 frames-per-second. Even before movies became a thing, he explains, psychologists found that 10 to 12 frames per second is where the phenomenon of apparent motion begins. “After that, the succession of images becomes apparent motion, whether that’s 15 frames, 24 frames or 2,400 frames-per-second.”
After expanding on the psychologists’ work that arrived at that result, and on experiments showing that other perceptual effects indicate we tend to see multiple images as one continuous image at about 10 images per second but not fewer, he admits that 10Hz is clearly not a desirable frame rate (given flicker and other artifacts). He reiterates that “human vision has no framerate” and notes that the effect of apparent motion, even kicking in at that rate, isn’t necessarily sufficient for making motion pictures.
“Unlike a camera,” he points out, “the cells in your eyes aren’t attached to a master clock that syncs and samples all the retinal cells all at one time.”
The rods and cones in our eyes, which, put simply, detect movement and color, respectively, do not all perceive visual information the same way or at the same speed. The fovea, right at the center of a human’s field of vision, absorbs far more detail, but more slowly, than the surrounding cells, which work more quickly but at far less resolution. To complicate matters further, the cones receive information more rapidly, but require more light than the rods.
Hess delves into some visual illusions such as strobing light sources that seem continuous to the eye and the famous “wagon wheel effect,” which can create the illusion we’ve all seen in movies of a spinning wheel that seems to be turning opposite from the direction it clearly must be moving.
He also cites a number of experiments undertaken to attempt to explain the illusion many have while on psychedelics that lights seem to be streaking, and how these and other effects relate to the unique way our visual systems actually work.
In the second video, he expands these ideas further. Here, instead of talking about apparent motion kicking in at 10Hz, he explores another point. Asking people to imagine a small object moving across a large screen, he explains, “In order to appear perfectly smooth and eliminate all stutter, each frame must move the image by no more than the smallest angular change humans can perceive.
“Say you’re sitting in a movie theater near the front of the middle of the hall. So, the screen occupies 55 degrees of your field of view. If you had a small dot that travelled from one side of the screen to the other, you would need 3300 frames-per-second to make that dot appear to move perfectly smooth across the screen in one second, as 55 degrees divided by 1/60th of a degree. If you want to move it in half a second [perfectly smoothly], you’d have to double that to 6,600 frames-per-second.” There is no theoretical upper limit, he says, but he suggests that 20,000 frames-per-second would cover almost all possible scenarios.
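Hess’s arithmetic there, written out as a minimal sketch (the one-arcminute figure is the commonly cited limit of normal visual acuity):

```python
# Frame rate needed for a small object to cross the screen with no visible stutter:
# each frame may move the object by no more than the smallest angle the eye can resolve.
screen_fov_deg = 55.0       # screen width as seen from a good seat
min_step_deg = 1.0 / 60.0   # about one arcminute, the usual figure for normal acuity

def required_fps(traverse_seconds):
    steps = screen_fov_deg / min_step_deg  # number of distinct positions needed
    return steps / traverse_seconds

print(required_fps(1.0))  # 3300 fps for a one-second crossing
print(required_fps(0.5))  # 6600 fps for a half-second crossing
```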
“The visual acuity is only that fine,” he reminds viewers, “in the center portion of our vision, the fovea. But then again, we never know what part of the path of the animation will cross our fovea… Here’s the thing, if we did construct a screen and fed it 20,000 frames-per-second, what would that flying object actually look like? Assuming we’re not looking directly at it? Well, it would look like a blur. So why not lower the frame rate and add motion blur? Now in real life filmmaking, we don’t need to add motion blur, it’s part of the process of shooting an image.”
And this gets even more complicated because of another important aspect of human vision: We don’t actually pan our eyes linearly. For questions about movement, motion blur and similar phenomena, he refers viewers to the site Blur Busters.
“Every bad idea, every bad argument in this field comes from reductionism — the oversimplification of topics in order to make them consumable by the populace.
“Our eye sees all frame rates, and they all have their look and feel,” he concludes, “so we should approach frame rate for what it offers aesthetically. As such, cinema remains at 24 because [that frame rate] is aesthetically pleasing and culturally significant… video games can take whatever frame rate your graphics card can muster, because that too is aesthetically pleasing: different mediums, different forms of expression, different frame rates.”
“Asteroid City,” Wes Anderson’s new film, is set in a Southwestern desert and involves a group of scientists, military personnel and students who encounter an alien at a meteor crater.
Cinematographer Robert Yeoman faced challenges shooting under bright, direct sunlight in the desert, maintaining consistent light in day exteriors, and working with a minimal crew and no artificial lighting.
Yeoman used techniques from early filmmakers, such as shooting outside with some bounce to compress the contrast range and using sunlight for interiors.
Anderson’s preference for single camera setups and choreographed camera movements contribute to his distinct filmmaking style.
Despite Anderson’s films being meticulously planned, Yeoman emphasizes the importance of flexibility and spontaneity on set, pushing himself to create visually interesting shots.
Pictured above: Jason Schwartzman as “Augie Steenbeck” and Scarlett Johansson as “Midge Campbell” in Wes Anderson’s “Asteroid City,” a Focus Features release. Credit: Courtesy of Pop. 87 Productions/Focus Features
Wes Anderson’s new film Asteroid City is set in a Southwestern desert in 1955, but really in a place that could only exist in Anderson’s imagination.
The story, involving a number of scientists, military personnel, and “Junior Stargazer” science students, finds the group gathered at a giant meteor crater for a ceremony honoring the children’s inventions, only to have everything thrown into chaos when an actual alien arrives.
Shot in the flat deserts of Chinchón, Spain, a small town not far from Madrid, cinematographer Robert Yeoman, ASC had to face a lot of things that people in his field generally try to avoid like the plague: shooting under the kind of bright, direct sunlight that tends to yield blown-out highlights and extremely deep shadows.
But Anderson’s entire aesthetic and approach to filmmaking would make it highly unlikely he would ever try to make life easier by shooting on a soundstage (or, God forbid, in front of a volume!).
Yeoman, who has shot all of Anderson’s live-action films since his directorial debut Bottle Rocket, decided to address the situation in the way the earliest filmmakers did. They shot outside with just some bounce to compress the contrast range somewhat and they used sunlight for interiors, too.
As Yeoman told Jim Hemphill at IndieWire, “I knew that inside locations like the diner would need light, so I just asked if we could put skylights in the diner and any building where we knew we were going to be shooting day interiors.
“[Production designer] Adam Stockhausen put skylights in and we covered them with very soft diffusion material and it worked out beautifully. We never used any lights, and that was Wes’ dream,” he said.
“It was a constant challenge maintaining consistent light in the movie’s many day exteriors. On other movies, I would be tempted to bring in some big lifts and put giant silks up to soften the light, but it was too windy out there in the desert.”
The filmmakers watched some movies shot in similar locations with minimal amounts of artificial light, such as Wim Wenders’ Paris, Texas, shot by Robby Müller, and the DP started to feel comfortable leaning into the approach.
He also explains to Hemphill that it’s not just lighting that Anderson likes to keep simple, it’s also crew. And the virtual elimination of lighting units went a long way to keeping things light and nimble.
“Often,” Yeoman says, it was “just [Anderson], myself operating, a focus puller, a second AC with a slate, a dolly grip and a boom guy.” While the director does use a small, handheld monitor as a small concession to 21st century convenience, “he sits next to the dolly [and] watches the actors.”
From the time Anderson directed the animated Fantastic Mr. Fox (shot by Tristan Oliver), he fell in love with the idea of creating animatics. On his next live-action film, he swapped out the storyboards he’d previously relied on for the more fleshed-out, moving approach of the animatic, and he’s never looked back.
In an interview with Tim Molloy at MovieMaker Magazine, Yeoman notes that while Anderson’s films are famously planned out in great detail, with camera positions and movements designed into the animatics, he still likes to show up to the set with some sense of flexibility, rather than having an exact plan for everything that happens that day.
“‘I want to be open to something that spontaneously might happen on set,’” he explains. “‘I do make my own diagrams of camera positions and lighting ideas that I share with my gaffer. But there’s always some nervousness when I show up in the morning, because I want to push myself to that place where you’re not sure about things.
“When I feel totally secure and confident and sure, I might lapse into something a little more conventional, whereas if I keep my edge, I might come up with something that’s a little more interesting visually.’”
The cinematographer is very supportive of Anderson’s single camera approach, despite the fact that second, third and even fourth cameras are very common on movie sets. “I find that we can often move faster with one camera than with two cameras,” he says.
“Two cameras often means you’re getting coverage. Coverage is great, but with Wes, every shot is specific for the particular moment that he wants. A lot of great directors have a distinct style because they believe there’s one place to put a camera and to tell a story, and that’s the place we’re going to commit to.”
Of course, a Wes Anderson movie wouldn’t be a Wes Anderson movie without enough choreographed camera movement to make a Max Ophüls film feel like Andy Warhol’s Empire in comparison. “There’s a scene where Jeffrey Wright is giving a speech,” the DP told Esquire’s Tom Nicholson. “Wes was eager to do it all in one shot, he [Wright] walks to one end of the stage and the other, so we had to make a track that would go sideways and in and out — typically you might use a Steadicam or a Technocrane to do a shot like that. But because Wes is so precise on the framing and compositions, even if you’re off a little bit, that’s not acceptable.”
To continue the conversation about camera movement, it would be appropriate to cite what may well be the only profile of a dolly grip ever to appear in The New York Times. Written by Melena Ryzik, the piece about Mumbai-born Sanjay Sami, who started out working on Bollywood movies and has been designing track and pushing and pulling dollies on Anderson’s movies since 2006, references a scene in which Adrien Brody makes his way “through a long theater space in an exquisitely detailed choreography of sets, props, walls, actors, dialogue and camera,” which, as Brody explains to Ryzik, “has to come off of a set of tracks and then be loaded seamlessly onto another set of tracks and hit numerous precise marks at very specific timings.”
“The thing I love is, with Sanjay, we essentially are using the same equipment that we might have used on a movie 75 years ago,” Anderson told Ryzik, “but we’re arranging it in a way that it hasn’t been arranged before.”
“In Paris, any time I walk down a street I don’t know well, it’s like going to the movies,” says the director.
July 2, 2023
Renard Jenkins: What’s Now and Next for Streaming Video
TL;DR
SMPTE President Renard P. Jenkins says streaming media is no longer a platform, but is rather an environment: “Streaming media is in constant motion, and we have to think about it that way.”
He asserts that subscriber churn is actually a good thing, providing a dynamic audience to which engineers must adapt.
Streaming hasn’t necessarily tipped the scales from a profitability standpoint, but its social impact in terms of change and disruption has moved the industry forward by leaps and bounds.
Renard P. Jenkins, current SMPTE president and until recently the senior VP of product integration & creative technology services at Warner Bros. Discovery, delivered a fascinating keynote address at the Streaming Media East conference.
The primary question he set out to address — “Where do we go from here and how the heck do we get there?” — is obviously one everybody at a conference about streaming media wants to hear about from someone in Jenkins’ position. Watch the full keynote session in the video below:
Streaming as a technology has expanded enormously in recent years, but building stable, predictable business models has been elusive.
Perhaps, Jenkins offers, people are looking at streaming the wrong way.
Starting off with a metaphor, he points to the literal platform he’s standing on and then elaborates, “streaming media is no longer a platform.” A platform, he explains, is solid. It doesn’t move. Streaming media is “an environment. Streaming media is in constant motion, and we have to think about it [that way].”
He acknowledges that people are worried about the economics of streaming in the near term, but by way of providing perspective, he asks his listeners to think about the technical progress that has already taken place. “A few years ago, the live streaming experience if you were a user was ‘interesting,’” he says, pausing for effect. “Interesting is probably the best word I can find for it,” he laughs. But he points to the vastly increased quality of streaming delivery in just the past few years and the significant growth of adoption, pointing to roughly 1.2 billion subscribers globally. “When you have that kind of growth, you can see that the reach of this industry and the reach of these products is enormous.”
He also points out the rapid expansion of streaming since the start of the pandemic, with first-run feature films premiering over streaming services and the great success the NFL has had in an even shorter timespan with Thursday Night Football.
He credits that growth to innovations that continue to take place. “CDNs are evolving,” he says, “Codecs are changing to be able to compress [more data without] losing quality.”
In the big picture, Jenkins sees a growing industry, though he admits, “we are seeing… churn from a subscriber standpoint, and some people are afraid of that, and they panic.” But Jenkins asserts that churn is actually a good thing. “You want an audience that’s dynamic. We can’t be afraid of seeing people change or move. But we have to adapt. And as engineers, that’s what we do. We adapt, we build, we change, we motivate, and we move things forward.”
It’s no secret, he says, that streaming has not necessarily tipped the scales from the profitability standpoint. “But if you take a moment to step back and look at it in terms of social impact, industry impact, in terms of change and disruption, streaming has moved our industry forward by leaps and bounds.”
Customers are excited about what they can do. “Everyone who had been stuck in the cable world said, ‘I am finally getting the opportunity to go to that à la carte service menu that I always desired without having to be stuck holding 90 to 100 channels I didn’t [want].’”
As the opportunities in streaming expand for creators and consumers, he says, churn is inevitable, but with that expansion of options, companies can no longer count on holding customers out of sheer inertia. There is simply too much competition, so this is the time for streaming media companies to build offerings that people want and let the cream rise to the top.
“We have to make sure, as engineers, as business owners, as streaming providers that we are ready,” he warns. “That means that our audience is going to continue to grow. That means that CDNs have to actually become faster, have to become more robust. One day, maybe we can push that 8K, or that 12K file.”
He stresses the importance of three overarching factors for companies hoping to rise to the top of this increasingly competitive environment. The first is engineering, which is no surprise; the second is design, which, he offers, people might not think about sufficiently.
“If you are an electrical engineer, if you are a mechanical engineer, if you are a sound engineer, you were probably not taught design theory in any of your courses,” he elaborates. “You were taught to think about how to put systems together. You were taught about how to put mechanical gears and so forth together. But you were never truly taught design theory. That is one of the things that I believe every engineer, every computer scientist — all of us — needs to make sure we understand.
“That is what’s driving our innovation right now,” he adds. “That and what I believe is the most important job in our industry today: Data science. The data drives our direction. The data also drives the decisions that are being made.”
Scalable, modular approaches to building out streaming services, he explains, along with excellent data and analytics, will lead to successful business models, churn notwithstanding.
As an example of the power streaming can have beyond traditional entertainment delivery systems, he points to Rihanna’s performance at the last Super Bowl. “The number of people that streamed that performance was astronomical,” he notes.
“Now, that’s not my demo,” he admits, “I’m more of the Lou Rawls, Parliament Funkadelic guy. That’s my era.
“But I will say that my daughter and her friends were streaming that, and they could care less about the game. It was all about Rihanna. And that was made possible because the technology is scalable.”
Summing up, he declares, “We need to move away from the idea of streaming versus any of the other outlets. I think we need to start looking at how we optimize streaming.”
“The Frost:” An AI-Generated Film Created “With Human Artists”
TL;DR
Artists are often the first to experiment with new technology but the immediate future of generative video is being shaped by the advertising industry.
A new 12-minute short film, “The Frost,” from Detroit-based Waymark, is being held up as one of the most impressive, and bizarre, examples yet of generative video.
Will Douglas Heaven at MIT Tech Review predicts that we will start to see generative video used in martial arts videos, or music videos and commercials.
Artists are often the first to experiment with new technology. But the immediate future of generative video is being shaped by the advertising industry, which is leaning into the often disjointed, surreal and even horrific imagery that generative AI tends to produce.
For example, the 12-minute movie The Frost is being held up as one of the most impressive — and bizarre — examples of this strange new genre to date. Every shot is generated by AI (DALL-E 2 and D-ID, mainly), yet it was also prompted, tweaked and guided by humans at Detroit-based video creation company Waymark.
“[AI is] not a perfect medium yet by any means,” Josh Rubin, an executive producer at Waymark and the director of The Frost, tells Will Douglas Heaven at MIT Tech Review. “It was a bit of a struggle to get certain things from DALL-E, like emotional responses in faces.”
The Frost follows a fake beer ad, “Synthetic Summer,” from British studio Private Island, which was designed to showcase the video capabilities of generative AI.
According to Heaven, both examples play to the strengths of the tech that made them.
“The Frost is well suited to the creepy aesthetic of DALL-E 2,” he writes. “Synthetic Summer has many quick cuts, because video generation tools like Gen-2 produce only a few seconds of video at a time that then need to be stitched together.”
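To make the stitching point concrete, here is a minimal sketch, in Python with the open-source moviepy library, of how a handful of short, separately generated clips might be concatenated into one continuous video. The filenames and clip count are hypothetical, and the snippet illustrates the general workflow rather than any tool used on these productions.

# Minimal sketch: joining short AI-generated clips into a single video.
# clip_01.mp4 through clip_05.mp4 are hypothetical, few-second-long shots.
from moviepy.editor import VideoFileClip, concatenate_videoclips

clip_paths = [f"clip_{i:02d}.mp4" for i in range(1, 6)]
clips = [VideoFileClip(path) for path in clip_paths]

# "compose" tolerates clips whose resolutions differ slightly,
# which is common when shots come from separate generation runs.
sequence = concatenate_videoclips(clips, method="compose")
sequence.write_videofile("stitched_sequence.mp4", codec="libx264", fps=24)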
This may mean that we will start to see generative video used in martial arts videos, or music videos and commercials, he speculates.
More complex narrative video, however, still requires huge amounts of human creative input.
Perpetual Motion: Getting That 21-Minute Take for “Extraction 2”
From “Extraction 2,” courtesy of Netflix
TL;DR
“Extraction 2” has proven itself a worthy sequel to the original, with stuntman-turned-director Sam Hargrave delivering intensified action sequences.
The Netflix action franchise film features an extraordinary 21-minute “oner” that includes a riotous prison break, a multi-explosion car chase in a forest, and a helicopter landing on a moving train.
The movie relied heavily on a well-prepared stunt team, with sequences featuring more than 400 performers. The car chase through the forest alone involved over 260 vehicles, some of which were uniquely modified for shooting. The audacious helicopter and train stunt required a novel approach, with rehearsals conducted on a semi-truck before moving to the actual train.
The action genre is currently working overtime, with healthy franchises and new ones emerging, including Netflix’s Extraction. The original 2020 film, directed by stuntman Sam Hargrave, had no guarantee of a sequel, even though the filmmakers hedged their bets with a scene at the end hinting at a continued narrative.
It was only after the original Extraction was produced and was being audience-tested that there was talk of a second movie and a possible franchise, as Hargrave told Brian Davids at The Hollywood Reporter. “When it was testing internally, we saw the desire of Netflix to have an action franchise in their stable. They were excited about the possibility and loved working with Chris [Hemsworth]. So we did three test screenings with audiences around town, and then they started to talk about a second movie internally.”
When a sequel was greenlit, Hargrave and his cast and crew achieved something unusual; they produced a film that was better than the original in almost every way and featured an impressive 21-minute “oner” in the middle.
One-take sequences are not unusual, but 21 minutes of pure fight-or-flight action is. Apart from the continuity demands, Extraction 2 needed months of preparation to make the three scenes work, including setting lead actor Chris Hemsworth on fire in the middle of a prison break, staging a multi-explosion car chase in a forest, and landing a helicopter on a moving train.
Hargrave summed up the prep work to Matt Fowler at IGN, “The rehearsal process (for the oner) was four or five months from conception to finding the locations,” he revealed, “mapping out the path and then getting the actors making all their moves.
“Then shooting took 29 days, I believe, to complete.”
During early conversations with producer Joe Russo, the idea of a more extended sequence came up. Hargrave told Variety’s Jazz Tangcay, “Joe said, ‘It’d be cool if we opened the film with Tyler (Chris Hemsworth) extracting someone from prison.’ Joe, he wrote this into the script: ‘And thus follows the greatest oner in cinema history.’”
Hargrave devised a plan forward, resulting in a 21-minute one-take sequence that sees black ops specialist Tyler Rake entering a prison to rescue the family of a violent gang member. As Tyler and the family members escape the prison and jump into armored vehicles, a chase ensues, and when they board a train, it comes under attack by gangsters who land a helicopter on the train.
As a stunt coordinator, Hargrave had been involved with one-take action before. He had been an MCU stunt master and Captain America stunt double, and he worked on Charlize Theron’s Atomic Blonde with its 10-minute oner fight scene. The first Extraction had its own 12-minute one-take, but nothing as complex and lengthy as the second movie’s.
The movie drew on his experience tying all his disciplines together, including operating the camera with fellow operator Nate Perry. Hargrave explained to Polygon writer Brandon Streussnig how his stunt regimen was a bonus when he carried the camera.
Hargrave’s stunt past prepared him for scenes like the “oner” in Extraction 2 — most notably as a camera operator. Hargrave takes an almost Buster Keaton-esque approach toward creating these incredible feats of human achievement and shooting them personally.
“The real challenge, truthfully, for me, is that a lot of operators and camera people could do a better job than I did, but there’s a certain weight of responsibility because of where I want to put the camera,” he says. “Sometimes it’s in a pretty dangerous spot. For example, in the second movie, when we were landing a real helicopter on a moving train, I wanted the camera to walk underneath the helicopter as it landed and then wrap around and see it leave. That’s a fairly dangerous stunt to pull off. I was blown off the side of the train. Luckily, I had a harness and a cable on during rehearsal.
“Truthfully, the main reason I do many of those things is not because I’m a better operator, per se. It’s just that I feel more comfortable putting myself in harm’s way.”
More details breaking down these three primary sequences come from Anna Menta at Decider. The prison break scene was filmed in two locations. The interior was filmed at Mladá Boleslav Jail in the Czech Republic, a former working prison now used exclusively for movie shoots, including Mission: Impossible – Ghost Protocol. The exterior courtyard riot — the far more difficult portion of the scene — was filmed at an 18th-century grain storage facility.
The prison riot alone took more than four months of prep and featured upwards of 400 stunt performers and “special ability action extras,” some of whom were fighting Hemsworth in the foreground while others fought each other in the background.
“There were 75 stunt performers and a bunch of specialty backgrounds woven in there,” Hargrave told Menta. “That took three nights to do that exterior, that whole thing. I relied heavily on our amazing stunt team to choreograph the background fights. There are layers and layers and layers of stunts and background and extras.”
The car chase through the forest used many of the same techniques Hargrave used for the car chase in the first movie, but there were a lot more cars. The production rented over 150 vehicles and purchased 116 more that were modified for shooting, including SUVs modified to drive from the top and SUVs modified to drive from the rear. Co-stunt coordinator Noon Orsatti operated the SUV “driven” by Chris Hemsworth in the movie, while in reality driving the car from the backseat.
Hargrave also wanted to up the stakes from the first film by bringing the camera in and out of the cars more often and getting closer to the actors during the chase.
Operating the camera was sometimes a matter of him offloading it to himself in a different shot, “In the car, for instance, it was like a contortionist ballet. I’m falling all over the place, hitting my head, and jamming my finger, but it all adds to the chaos of the sequence.”
As for the helicopter and moving train sequence, the team rehearsed the stunt by first landing the helicopter on a flatbed truck. “We started stationary with a semi-truck, and then the truck moving in a large open parking lot, and got to the speed we wanted to show that he could consistently do it. It took a couple of days,” Hargrave said. But beyond the pilot Fred North learning the stunt, Hargrave had to figure out how to film it.
“I was up on the top of the train, handheld, and as (the helicopter) came in, my biggest concern was not getting in the way of those moving blades,” the director said with a laugh. “I had to wait for the right moment, as he flares out, and then I ran towards it. It felt like running into a hurricane because the downdraft of that helicopter is powerful. So I have to run through that. I was as close as three feet, maybe four feet — I could reach out and touch Fred if I wanted to.”
Ultimately Hargrave’s philosophy on a successful oner is based on a different type of role playing. “A oner is more like a video game or an interactive play. Traditional cutting and coverage is traditional filmmaking, and that’s how it’s been done for a long time. So the idea of the oner is to follow a character through a scenario and experience it with that character in real-time,” he tells Menta.
“It’s its own version of forced perspective. Yes, the camera is looking where I want you to look. However, it feels more organic than putting the camera over here and then cutting over there and forcing the audience to see and feel something. The oner allows the audience to feel something during these action sequences, and it’s one way to differentiate yourself.
“I’ll never be able to out-kick, out-punch, and out-choreograph [Chad Stahelski or David Leitch] because they’re the best. So the best I can do is to offer a slightly different perspective on the action and say, “This is how I see it. This is how it’s fun for me to experience it.” And hopefully, audiences appreciate that when they watch it.”
Polygon’s Streussnig, however, recognizes the stuntman-turned-director as a savior of sorts. “With innovators like Sam Hargrave running around, throwing themselves underneath helicopters to get the perfect shot, the oner has been rescued just as it was getting stale,” he writes. “He’s found a way to extract it, if you will, from thoughtless, CGI-laden exercises and propel it to explosive new heights. If Extraction 2 proves anything, not everyone can pull these sequences off — at least not in ways that feel like they’re worth the effort.”
NAB Show New York will take place October 24-26, 2023, with exhibits running October 25-26, at the Javits Center.
It’s the most efficient and diverse event experience on the East Coast where thousands of content economy professionals reconnect, evaluate and rediscover ways to get more creative with new-to-market tech and existing tools.
In addition to the comprehensive exhibits and free on-floor education, NAB Show New York offers a range of conference programs designed to bridge the gap between process and product.
It’s all happening at NAB Show New York, taking place October 24-26, 2023, with exhibits running from October 25-26, at the Javits Center.
Produced by the National Association of Broadcasters and once again co-located with the AES New York 2023 Convention, NAB Show New York is the most efficient and diverse event experience on the East Coast, where thousands of content economy professionals reconnect, evaluate and rediscover ways to get more creative with new-to-market tech and existing tools.
Through exhibits, conferences and networking events, NAB Show New York offers broadcast, media and entertainment professionals the opportunity to refine their skills, learn new strategies and connect with industry experts to enhance their audio, photo and video capabilities.
“NAB Show New York offers a unique touchpoint for the industry, particularly for those who live and work in New York City and the surrounding region,” said Chris Brown, NAB executive vice president and managing director of Global Connections and Events.
“It is a perfect opportunity for content professionals to go deep with technology partners and peers to problem solve and reconnect, reevaluate and rediscover the transformative power of new technologies and existing tools. The pace of change in the industry requires an elevated and more frequent level of engagement, knowledge sharing and relationship building.”
In addition to the comprehensive exhibits and free on-floor education, NAB Show New York offers a range of conference programs designed to bridge the gap between process and product.
Add VIP treatment to your NAB Show New York experience with the NAB AMPLIFY+ package, which includes an expedited badge pick-up line, networking events, a complimentary coffee bar and exclusive, year-round digital content. For more information on registration packages, and to register, visit the NAB Show New York registration portal.
With so much to do at NAB Show New York, it can be overwhelming to make your plan, so the NAB Amplify team is sharing its can’t-miss list.
June 26, 2023
Neal Stephenson: Gaming Has (Almost) All the Tech We Need to Make the Metaverse
TL;DR
Author Neal Stephenson says gamers and gaming tools are the fundamentals of the decentralized 3D internet.
Gamers need to be rewarded by being part of the decentralized, blockchain-underpinned metaverse.
Stephenson says AI lacks originality and detail and doubts it will produce sophisticated narrative 3D worlds, although it will be an assistant to get some of the way there.
The building blocks of the metaverse, Stephenson says, are largely already in place in gaming. They include the sophistication of game engines and “the fact that those game engines can be downloaded and used for free,” advances in processing power, the lower cost of the hardware needed to render three-dimensional imagery in real time, and — just as importantly — the user base.
“The people who’ve learned how to navigate 3D environments by playing video games must be past the one billion mark by this point,” he said.
However, if the next generation of the internet is to be built on a model that decentralizes power and reward (which is a good thing in Stephenson’s mind), then support for those creators needs to be addressed.
“Right now, the skillset that’s needed in order to create metaverse experiences is basically what you see in the game industry. People who know how to use game engines and who know how to create the assets that feed into those game engines. We need to create the economic basis for them to get rewarded if they succeed in creating metaverse experiences that lots of people are enjoying.”
Lamina 1 is Stephenson’s own attempt to do this. It’s a blockchain on which to build the infrastructure of an open and decentralized metaverse that puts technology in the service of humans, not the other way around.
There’s no escaping the obligatory question these days on AI and its potential impact on the creative process. Are we getting close to being able to ingest an adventure book into AI, The Lord of the Rings for example, and generate a 3D world based on that story with characters and then be able to play it in VR?
Stephenson (of course) knows the folks at Weta in New Zealand, the VFX house that helped make The Lord of the Rings movies for Peter Jackson.
“If you watch those movies, one of the things that makes them great is the personal attention and care that’s lavished on every single detail of every costume and every prop,” he says. “So I don’t think we’re going to see work of that quality coming out of AI. Just because it requires that you do original thinking and come up with something different.”
Of Snow Crash and its Nostradamus-like foresight, he says, “I’d say I was the first to use that word [metaverse], not the first to imagine that. I mean, as I’m sure you know, with your background in the field, there were people thinking about similar systems before I wrote the book. Habitat being one example.
“The metaverse as described in Snow Crash was my best kind of guess as to what a mass medium based on 3D computer graphics might look like, but the metaverse per se in the book is neither dystopian nor utopian, or at least that’s how I meant to portray it.
“In the opening pages of the book our initial exposure to the metaverse is kind of very mass market, lowest common denominator, sort of crude, obvious, like the kind of the worst of television. But later on, as we get farther into the book, we see that people have used it to make beautiful works of art. And we see that there are some people who have lavished a lot of time and attention on making homes in the metaverse that are exquisite works of art.”
Marc Andreessen: How I Learned to Stop Worrying and Love AI
TL;DR
Venture capitalist Marc Andreessen is optimistic about the transformative potential of artificial intelligence, asserting that it will positively reshape society.
In a 7,000-word manifesto, he argues that AI does not present the dangers often depicted in films and won’t result in job losses. Instead, he suggests that technology, including AI, enhances productivity and stimulates economic and job growth.
The billionaire investor argues that AI is on par with or even surpasses landmark innovations like electricity and microchips, and thus, its development is a moral obligation.
Andreessen maintains that AI will aid in various roles from tutoring children to assisting professionals, leading to advancements in numerous fields such as medicine, arts, and climate change solutions. He also suggests AI can humanize various aspects of life and tackle challenges that were previously impossible to address.
Marc Andreessen, the influential venture capitalist and a key figure in the internet revolution, has long been a voice on the transformative potential of technology. His recent thoughts on artificial intelligence, as shared in an in-depth article and a revealing conversation with a16z general partner Martin Casado, demonstrate his belief that AI is set to positively reshape society, with profound implications for the media and entertainment industry.
In a nearly 7,000-word manifesto entitled “Why AI Will Save the World,” Andreessen explores the potential benefits of AI and advocates for its development and proliferation. He also acknowledges the public fear and paranoia surrounding AI, attributing it to a “moral panic” that often accompanies impactful new technologies. While not dismissing all concerns as irrational, he implies that such panic often magnifies potential issues to a level of hysteria that makes it harder to address legitimate concerns.
Simplifying AI to its bare bones, Andreessen describes it as a computer program that processes input and generates output, similar to how humans understand, process, and generate knowledge. This, he posits, is far from the ominous, world-ending machines popular culture often paints AI to be.
“[AI] is owned by people and controlled by people, like any other technology,” he writes, offering a description “of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.”
AI also won’t take our jobs, Andreessen maintains, debunking the “lump of labor fallacy” that states there is only a fixed amount of work to do in an economy at any time. “A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there,” he explains. “And that is why technology doesn’t destroy jobs and never will.”
Not only are our jobs safe, but AI will result in higher wages, he goes on to predict. “Technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise. This in turn causes economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally and if technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends.”
Highlighting the ongoing influence of AI with the proliferation of tools like ChatGPT, Andreessen draws parallels between the potential of AI and the benefits brought about by human intelligence across a multitude of domains. He suggests that AI could greatly augment human intelligence, leading to advancements in a broad range of fields, from medicine to arts and even to climate change solutions.
“ChatGPT just wants to make you happy,” he tells Casado. “Right? It just wants to satisfy you, like it actually is trained on a system that basically says its role in life is to be able to make people happy,” he says, describing how the ChatGPT interface encourages users to rate responses with a thumbs up or down. “There’s this giant supercomputer on the cloud. And like, it’s just, like, desperately hoping and waiting that you’re gonna press that thumbs up button.”
The billionaire investor argues that AI is on par with or even surpasses landmark innovations like electricity and microchips, and thus, its development is a moral obligation for the betterment of our future.
“The stakes here are high,” he notes. “The opportunities are profound. AI is quite possibly the most important — and best — thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.”
Andreessen envisions an era where AI aids individuals in various roles, from tutoring children to assisting professionals, thereby enhancing decision-making and productivity, accelerating scientific breakthroughs, and fostering creativity. He believes that AI has the potential to humanize various aspects of life and can tackle challenges that have been impossible without it. He even suggests that AI could improve warfare by aiding better strategic and tactical decisions, “minimizing risk, error, and unnecessary bloodshed.”
The most underestimated quality of AI is “how humanizing it can be,” he writes. “AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.”
Andreessen also discusses the creative potential of AI, emphasizing that, for the first time, we have software that can create art, music, literature, poetry, and even jokes, with significant implications for the Media & Entertainment industry for everything from content creation to managing intellectual property rights.
“The creative arts will enter a golden age,” he says, “as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.”
Sorry, AI: Intelligent Storytelling Still Requires Human Creativity
From “Fathead,” produced by Tom Thudiyanplackal
TL;DR
AI for production should be thought of and used as an assistant, says technologist and producer Tom Thudiyanplackal, because currently generative AI tools only deliver approximations.
There are dangers in mixing the iterative nature of AI with the non-iterative nature of the day of actual shooting in a virtual production set.
Studios are building their own AI models trained on data they own to speed production and safeguard against IP challenges.
“While tools like ChatGPT and Midjourney are being made free to us today, soon, they won’t be free,” warned technologist and producer Tom Thudiyanplackal. “So, we don’t want to be too addicted to them or too dependent on them.”
Thudiyanplackal produced the experimental virtual production short film Fathead for ETC (Entertainment Technology Center). He believes the intelligence that we’re transferring to the AI is really a human intelligence.
“We should never forget that. It is always machine learning, it’s learning from us. So we have to retain our intelligence, we have to retain what we wish to achieve with certain tools [and retain] goals for our creative outcomes.”
In a presentation for the vETC virtual conference at NAB Show, Thudiyanplackal shows how it is possible to script a scene and create a character using text prompts. In doing so, he is highlighting AI’s limitations and the need for humans to create something worth using. Watch the full presentation in the video at the top of the page.
“It’s able to do some of these things pretty well [but] there’s still a certain degree of accuracy that’s missing. These are the places where you cannot wholly depend on it. You still need to have your basic knowledge in whatever area that you’re wanting to use an AI for.
“So, you need to know how good screenplays are built, what dramatic structures [look like], what good plots are, how to build good information that makes a character interesting to the audience. There are no shortcuts for those things, so you do need to do the legwork.”
Human talent and knowledge of story is needed to decide and build into an AI all sorts of nuances including exactly what scenario the drama is playing out. What’s the conflict and how does it unfold in that character’s journey?
“Those are things that you’re weaving into the story and not something that’s going to come from the AI software. These are things that you do need to go and learn from masters, observe their work, watch a lot of movies, go work with somebody as an assistant.”
Where Thudiyanplackal believes AI scores currently is in ideation (such as design ideas for marketing posters at the pre-production stage) and in the ability to give everyone the tools to draw, storyboard or create 3D models. Even here, though, he advises users to be wary of certain anomalies that will occur once in a while.
“If there’s a cow standing on one side of a building, the front of it sticking over here, and there’s another cow standing on the other side of the building, with just the back of it sticking out, the AI could just count that as one cow — one very long cow. Those errors have to be tracked by a human. With all of these AI tools it starts with approximations and then you have to keep massaging it to get to a certain level.”
This matters a lot if you’re building scenes and characters for eventual use in a virtual production set-up with live action. It is becoming increasingly clear to all who use virtual production that success using this method is all about pre-planning.
You cannot arrive on a virtual production set with an approximation of what you are going to shoot — it needs to be exactly what you are going to shoot.
“A lot of people heard the early advertising around virtual production and got very excited believing that at the flip of a button, we could be at Rome in the morning, and then Paris doing a night sequence. The issue is that directors would show up on the day and request for different variations, which wasn’t discussed prior… and they would get disappointed because those changes are not possible,” he said.
“You couldn’t request for something on the wall that we hadn’t already planned, built, tested, and pre-written.”
In other words, even with AI, we are not yet at the stage of real-time production on a virtual set (although there are tools such as Seyhan Lee’s Cuebric that purport to do just that).
Thudiyanplackal also tackled the questions of copyright and ethics and the legitimacy of algorithms training on human artists’ data. He says studios have been very careful to date about using AI because they don’t want to take the risk of breaking the law.
“So, when they want to experiment with AI in their pipeline, they’re being very specific,” he says. “Rather than using this very general tool, which applies to an entire project or an IP, they’re more interested in something very specific where an AI generates quick 3D meshes to build 3D environments, for example.”
He cites the case of AI being used to de-age Harrison Ford in Indiana Jones and the Dial of Destiny. “That’s a very specific thing that they’ve done. They use their own training data, because Lucasfilm had done so much work with them. They had all that training data from that age that they needed. It all comes down to that baseline training data, to protect IP and to be safe.”
He says that studios plan on building their own AI models using proprietary training data. “That’s why [studio] adoption is slow, but those are things that are in discussion.”
Aimed at empowering creatives, the AI Creative Summit will be held at NAB Show New York Oct. 24-25.
June 22, 2023
As AI Expands the Media Universe, We Are Going to Have a Ton of Data (Like, So Much Data)
TL;DR
The 2023 NAB Show Streaming Summit gathered a panel of industry experts to explore how the latest AI applications are accelerating the content pipeline in broadcast, film, television and game development.
Moderated by NVIDIA global broadcast industry marketing and strategy lead Sepi Motamedi, the panel included NVIDIA execs Ian Andes and Rick Champagne alongside Oracle VP Alesandra Madurowicz.
Oracle and NVIDIA have a multi-year partnership to bring the full NVIDIA accelerated computing stack to Oracle Cloud Infrastructure (OCI) in order to speed AI adoption in enterprise.
NVIDIA recently launched AI Foundations, the company’s new generative AI cloud services, which are available through DGX Cloud on OCI. These include the NVIDIA NeMo language service for translation and localization, and Picasso, a suite of image, video and 3D services.
Artificial intelligence is here to stay and our understanding of its capabilities is growing each day, transforming the future of work and perhaps even the fabric of our daily lives. Media & Entertainment companies are rapidly moving to adopt AI applications that will help optimize their operations, improve audience engagement, and streamline content creation and distribution workflows.
A panel of executives from NVIDIA and Oracle convened in the LVCC’s West Hall at the 2023 NAB Show’s Streaming Summit to discuss how AI-driven analytics are being used to make sense of the vast volume of data generated across the M&E value chain and what the future of this technology holds for the industry.
“AI and the Expansion of the Media and Entertainment Frontier” highlighted the latest AI applications accelerating the content pipeline in broadcast, film, television and game development. The panel discussion was moderated by Sepi Motamedi, global broadcast industry marketing and strategy lead at NVIDIA, and included Ian Andes, senior business development manager for M&E at NVIDIA, alongside Rick Champagne, NVIDIA’s head of Industry Strategy and Marketing for M&E, and Alesandra Madurowicz, VP of Media & Entertainment Streaming at Oracle. Watch the full session in the video above.
The discussion kicked off with a recap of the multi-year partnership between NVIDIA and Oracle announced in late 2022, which is intended to help enterprise customers solve business challenges with accelerated computing and AI. The collaboration aims to bring the full NVIDIA accelerated computing stack to Oracle Cloud Infrastructure (OCI), including the addition of tens of thousands of NVIDIA A100 and H100 GPUs to its capacity and NVIDIA BlueField-3 DPUs to its networking stack.
This partnership has enabled initiatives like NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, which leverages OCI to provide more than 100 frameworks, pretrained models and development tools to streamline development and deployment of production AI, including generative AI. Other initiatives include DGX Cloud, a multi-node AI-training-as-a-service solution providing enterprises with their own AI supercomputer in the cloud.
“For those of you who aren’t familiar, NVIDIA is now a full-stack company,” Champagne said. “We have everything from the GPUs that you know us for. But we also have branched out, in fact, in the last five or so years that I’ve been at NVIDIA, we’ve more than doubled in size in terms of employees. We have more software developers now than we have hardware engineers. The company has completely transformed itself.”
This transformation has enabled the recently announced NVIDIA AI Foundations, the company’s new generative AI cloud services, which are available through DGX Cloud on OCI. These include the NVIDIA NeMo language service for translation and localization, and Picasso, a suite of image, video and 3D services allowing enterprise customers to build proprietary, domain-specific generative AI applications for professional content creation, digital simulation and more.
Content creation generates monumental amounts of data, on both the creator and the consumer sides, but AI can help us understand that data, the panelists agreed. If content is king, then “data is the queen,” said Madurowicz.
“A lot of service providers are thinking, ‘We need to create more content, make it more engaging, just pump out as much content as possible to stay competitive,’ when in actuality a lot of the platforms that are maintaining engagement and retention are really about the personalization of the experience of that content,” she explains. “And that’s data. That’s not the content itself; that data is empowering those content decisions, and consumers are having the ability to kind of choose their own adventure, based on historical data or real-time data.”
The sheer potential of what can be done with all this data can be overwhelming, Madurowicz continues, but she believes it should be used to create personal recommendations for consumers. “But we’d be remiss to gloss over how AI and AI-driven analytics will affect the internal decision making of, you know, trend forecasting for certain content, when to launch, what systems are operating the most efficiently,” she adds. “It’s the full landscape of where data and analytics are combined for the end experience.”
Building a strategy for AI is vital, Champagne said. “Whatever that strategy may be, you really need to think through what’s going on out there, and how you’re going to work in this new world, how it’s going to help accelerate your business outcomes, how it’s going to help accelerate the work your employees do.”
But AI can do more than simply speed things up, Andes suggested. “The other thing that AI allows you to do is not only create more content faster, but improve the quality of the content being created,” he said.
The proliferation of AI also means that it’s available to organizations of all sizes.
“A couple of years ago, [AI] used to be only for the big, big companies with all the money,” Andes said. “But I think about the power behind the systems and the power behind the AI and how inclusive it is now, so that everybody can partake. And then I think about what used to take seven years, and then got cut down to four years,” he continued.
“The turnaround time for you to be able to go from concept to final output has been fractionalized. So not only is that great for the planet, but it also means that you get to monetize your results faster. Or, if you’re in a broadcast news environment, you get that finished final product out to your end customer that much quicker, which can make a really, really big difference to them.”
Twisted (Twin) Sisters: Double the Cinematography for “Dead Ringers”
TL;DR
Starring Rachel Weisz as the Mantle twins, Amazon Prime Video series “Dead Ringers” nods to the 1988 David Cronenberg movie but changes the fundamentals.
Laura Merians Gonçalves, one of the show’s DPs, said showrunner Alice Birch and lead director Sean Durkin referenced Ingmar Bergman’s “Persona,” Robert Altman’s “Three Women,” and Andrzej Zulawski’s “Possession” for the series.
The work of Thomas Eakins, who executed some of the first paintings of surgeries, the photography of Gregory Crewdson, and Heji Shin’s birth portraits also served as points of reference.
VFX supervisor Eric Pascarelli employed a motion control rig with a Panavised Sony Venice camera to replicate Cronenberg’s “twinning,” or shooting one actor who plays twins.
2023’s Dead Ringers gave the nod to David Cronenberg’s 1988 movie through its six-episode Amazon Prime Video run, but changed the fundamentals. This time the twin OB-GYNs would be female, played by Rachel Weisz, with an emphasis on exposing some of the horrors hidden in plain sight for pregnant mothers in the 21st century. The new Dead Ringers was happy to delve into those but still cherish what had come before.
The production would follow the sisters’ mental descent as all their dreams come true with the opening of their birthing centers. One of the show’s DPs, Laura Merians Gonçalves, explained the narrative changes and the references that guided them in an interview for the Panavision website.
“Every episode has a slightly different tone, so the references changed as the visual style expanded with the narrative. The showrunner Alice Birch and lead director Sean Durkin and I discussed Bergman’s Persona, Altman’s Three Women, and Zulawski’s Possession.”
The color red was also a touchstone for the show and another salute to the 1988 movie. Gonçalves explains how the color grounded the plot, “When I started prepping, we knew the style of the show would shift to incorporate a lot more color, specifically reds. Dialing in the red was a huge focus for us. It was sort of a nod to the original, that these characters are wearing red scrubs, and many different reds — from blood red to scarlet — get introduced in the Mantle birthing center.”
The design of the new birthing centers needed to be starkly different compared to the bland Westcott Memorial, where the sisters used to work. Gonçalves described to Daniel Eagan at Filmmaker Magazine the multiple locations that completed the new center. “We needed to pay attention to how the environment would be observational and private simultaneously, a kind of bespoke experience they wanted to create for their clients. A challenge was that the center was a combination of several locations.
“The exterior was a carousel in Battery Park City with VFX added on, the atrium was a gym, and the embryology lab was the old TWA terminal at JFK. We had about six sets that we used for different things.”
The age-old technical hurdle of twinning, shooting one actor who plays twins, was also something Dead Ringers duplicated from the 1988 movie. They would use a motion control rig but this time with a Panavised Sony Venice camera.
Split-Screen Magic Trick
Using motion control dictated a highly disciplined process on set, as all departments had to act as one. The show would need a VFX supervisor to check when composites were required, especially when the sisters were close to each other.
“We wanted to make sure that everything was laid out so that the acting could shine,” Pascarelli said. “It was not necessarily the most efficient way, but the things we could have done to save time on set would have required cheats [in post] that would have kept us from knowing what we were getting on the day.”
For each scene where Weisz acted opposite herself, the screen would essentially be divided into an “A” and a “B” side. The first takes would be Weisz on the “A” side of the frame — typically portraying Elliot, the more prominent personality — opposite her double, Kitty Hawthorne; once the filmmakers had a few takes that they liked, the best one would be selected, and the production sound mixer quickly edited together a track of Weisz’s dialogue while the actress changed into costume for Beverly.
Then, Weisz returned to shoot the “B” side of the frame with an earwig playing the dialogue in her ear while Hawthorne and the other actors pantomimed the scene.
“The video assist triggered everything,” Pascarelli said. “It was the timecode master of the whole process, and it synchronized audio and the lighting board.”
The filmmakers created what Pascarelli described as “a crude, hand-operated split screen” to watch the interaction between the twin characters on the video assist system and ensure the eyelines matched and everyone was happy with the performances.
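As an illustration of what such a hand-operated split screen does, here is a minimal sketch in Python with OpenCV: it previews the left half of an “A”-side plate against the right half of a “B”-side plate so eyelines and timing can be judged together. The filenames, the fixed 50/50 split, and the assumption that both plates share a resolution and are already synchronized are simplifications for illustration; this is not the production’s actual video assist setup.

# Minimal sketch: previewing an A/B split screen from two synchronized plates.
# plate_a.mp4 and plate_b.mp4 are hypothetical, same-resolution recordings.
import cv2

cap_a = cv2.VideoCapture("plate_a.mp4")  # take with the lead on the "A" side
cap_b = cv2.VideoCapture("plate_b.mp4")  # take with the lead on the "B" side

while True:
    ok_a, frame_a = cap_a.read()
    ok_b, frame_b = cap_b.read()
    if not (ok_a and ok_b):
        break
    height, width = frame_a.shape[:2]
    split = width // 2  # fixed 50/50 split for simplicity
    composite = frame_a.copy()
    composite[:, split:] = frame_b[:, split:]  # right half comes from the B plate
    cv2.imshow("split-screen preview", composite)
    if cv2.waitKey(33) & 0xFF == ord("q"):  # ~30 fps playback; press "q" to quit
        break

cap_a.release()
cap_b.release()
cv2.destroyAllWindows()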
The key was to get clean shots of Weisz as both Mantle twins, so occasionally Hawthorne would duck in the middle of a shot while Weisz crossed her; on a few rare occasions, the blocking required that Pascarelli use deep fake technology to map Weisz’s face onto Hawthorne’s. However, that was only done five or six times, and the filmmakers tried to avoid it.
“The problem with that technology is that whatever Kitty’s acting becomes Rachel’s. Since this is a show about Rachel’s performance, we tried to limit those situations to ones where the second twin was standing there stone-faced,” Pascarelli said.
He felt the series shared with Cronenberg’s original a desire to keep things simple whenever possible. “We wanted to be as sparing as possible with the effects and keep the show-offy things to a minimum.”
The split-screen magic trick works beautifully, with an outstanding performance by Weisz to cement it. Angelica Jade Bastién at Vulture had this to say about her portrayals:
“Her performances shimmer with a feminist current that charges every rise of her voice and every gesture of her body.
“It feels like the apotheosis of what she has demonstrated before and then some — a gentle beauty complicated by fierce intelligence, a graceful presence stitched through with ungainly wants, a voice that flows like the tides. Dead Ringers is the ultimate acting opportunity: Rachel Weisz topping herself.”
Cinematographer Larkin Seiple goes from restrained to “full Coen Brothers mayhem” for the Netflix series “Beef.”
June 20, 2023
Capturing the Chaos: Cinematography on the Next-Level Bonkers “Beef”
TL;DR
Starring Ali Wong and Steven Yeun, Netflix series “Beef” was shot by “Everything Everywhere All at Once” DP Larkin Seiple.
Seiple opted to use the ARRI Alexa LF equipped with Zeiss Supreme Primes to shoot the series, with the Sony Venice 2 employed for the nighttime car crash scene.
The production team worked to find the most realistic way to light, design, and clothe the actors while at the same time also trying to give hints and clues and absorb their personalities in those choices.
Creator and showrunner Lee Sung Jin said his idea for the show was loosely based on a road rage incident he experienced.
It is perhaps fitting that a giant Netflix episodic should have an aesthetic that lets its story breathe during a WGA strike, when writing itself is being reconsidered. Beef was shot by Everything Everywhere All at Once DP Larkin Seiple, who deliberately downplayed the presence of his cameras to “get out of the way” of the story.
Larkin explained what that meant in practice. “We opted to minimize the amount of coverage we did and stick to the basics. Can we do this whole scene as a close-up? Can the camera stay with either Amy [Ali Wong’s character] or Danny [Steven Yeun’s character]? We didn’t want to cover it as much as most shows cover things; we wanted to see how little we could do.”
This condensing of the coverage extended to all departments. “In collaboration with our costume designer Helen (Huang), and our production designer Grace (Yun), we tried to find the most realistic way to light, design, and clothe the actors while at the same time trying to give hints and clues and absorb their personalities in those choices as well.”
It’s a shooting design without ego, shying away from extreme filtering, flares, and shallow depth of field — but finding that they are much more effective if and when they appear.
Even Larkin’s lens choice shunned the limelight. “I think this is the first show that I picked lenses that were sterile,” he said. “We wanted lenses that could get close to the actors that didn’t distort. Normally I’m a fan of using vintage glass with more character. On this show, it just felt that the intentions had to be bare and honest. We didn’t want to create subjective imagery; we wanted to create subjective shot structure. We wanted to be with the actor the whole time. You’re bound to these two terrible people.”
Iain Blair at Post Perspective had more details on the gear choices Larkin made: “We shot it on the [ARRI] Alexa LF but with the Sony Venice 2 for the night car crash scene, and we used Zeiss Supreme Primes. We went with sharper glass because, in our testing, we found out that we could degrade the image with more control in post,” he said.
“We also shied away from super-flary or vintage lenses, as we felt it was too affected for the story and put too much emphasis on the filmmaker instead of keeping the audience with the characters.”
Beef follows the aftermath of a road rage incident between two strangers. Danny Cho, a failing contractor with a chip on his shoulder, goes head-to-head with Amy Lau, a self-made entrepreneur with a picturesque life. From then on, it was an actions-have-consequences blowout.
Lee Sung Jin is the creator and showrunner of Beef, and his idea for the show was loosely based on a road rage incident he experienced. “I thought there was a show there about two people who are very much stuck in their perspectives and have a lot going on in their individual lives that this incident unravels.”
In her review of the series, IndieWire’s Sarah Shachat describes a camera that traps the characters. “The show’s visual language remains restrained throughout most of Beef, showing the characters only in the light that they deserve; the camera traps them in single shots that go on just long enough to hint at how tenuous a grip Danny and Amy have on their lives but without the sweeping camera movement that announces a Capital-O Oner. But these shots are still precisely timed so the audience can be thunderstruck,” she writes.
“But Beef can’t live in the moment forever. The visual language of the show expands from, in Seiple’s words, invisible and observational to ‘full Coen Brothers mayhem’ by the series’ end, with some especially wonderfully bold color choices taking place in Episode 9 during a heist gone wrong at the home of the wealthy Jordan (Maria Bello).”
Beef’s road rage sequence at the start of Episode 1 also acted as the trailer for the show, so it had to entice you into the story — the coverage meter ramped up to 10 for it.
Larkin broke down the scene. “Our original choice was to keep the camera in the van with Danny to see his POV and face. So we could feel what it was like actually to chase someone. But we felt that, shot this way, you wouldn’t understand how dangerous it was,” he said.
“Most of the sequence is very much with Danny, and we put him on a car rig which is a platform that has another driver behind it. Steven’s reactions are real, so he asked us to slow down as it was terrifying. The other camera angles were very close to Danny’s car; there’s never a wide shot, drone, or Technocrane shot, nothing fancy.
“We kept it grounded and concentrated on showing how much he wanted to see who was in the car he was chasing.”
Larkin recounted how the scene was shot to interviewer Nathaniel Goodman, ASC during an episode of the ASC Clubhouse Conversations podcast. “So we mounted three cameras to his car. And it’s not like a process trailer. It’s like he’s swerving through traffic. And you’re seeing him adjust to the inertia and dodge traffic. And that’s what makes his POV shots successful, the shots on him, and it feels like he’s making these terrible choices,” he said.
“It took some convincing to do because it’s always tricky to ask production to spend the extra resources on something that you couldn’t just do on a process trailer; it’s not going to feel the same. It has to feel 100% like he’s driving.”
After the scene, Larkin’s coverage dies down to the minimum, swapping it out with scenes that linger with the leads in either sweaty close-ups or their own style of wide shots, which Larkin explained.
“We shot Ali and Stephen on wider lenses than all the other characters in the show. So you’d naturally get that subjective sense that you’re closer to them, and maybe their ideas are a little wonky.
“It sounds weird, but we built the concept of it, and then we also just felt it, like we would walk up and watch the rehearsal and say that ‘the camera should be here’ and then find the lens that would make that work rather than saying that it’s a 52mm or whatever. We’d feel the relationship with the actors and find the lens that matched that.”
Beef is only Larkin’s second TV show, and he is getting used to shooting at the speed of an indie movie for 60 to 70 days. “It’s tricky for me, different.” His next project, a two-hour feature, shot for about the same length of time. Let’s hope we don’t lose him to the longer form.
Vying for Eyes: Investments in the Attention Economy
TL;DR
A panel of experts at the 2023 NAB Show shared their advice on how to grab a viewer’s attention within just a few seconds or risk losing them to the next reel or a channel change.
Ideas include: elicit a human emotion, provoke, give the audience just enough to keep them engaged, and keep content authentic to audience and platform format.
The panel also touched on the impact of AI, agreeing that brands should harness the mass of content creators now able to create polished content using generative AI tools.
Grabbing people’s attention in the first few seconds has become the de facto metric for brands and advertisers, one that applies across mediums from TV to TikTok. When the average person is exposed to as many as ten thousand ads a day, the aim is to hook audiences quickly to generate brand awareness. But how do we cut through the noise and effectively grab a consumer’s attention?
A panel of experts at the 2023 NAB Show took to the stage to share their views in a session entitled “How to Stand Out in a 3 Second World.” You can watch the full session, moderated by Upworthy VP Lucia Knell, in the video below:
“The three seconds should be inviting someone in, opening the door, and then, like, keeping them on long enough so that they actually stay and meaningfully engage with the content,” said Clare Stein, executive creative director of ATTN.
ATTN has a creative checklist to gauge whether a piece of content is gaining attention. “It’s not a science, it’s not something that we’re like bureaucratically crossing things off, but it can help.”
Stein suggests creating a “curiosity gap” right upfront in the sense of explaining to the audience what they’re going to get in the video, but not giving it all away. “Otherwise, they’ve kind of gotten what they’ve needed, and they can move on. But you also want to show you’re providing some real value to the audience.”
Another best practice is to elicit human emotion as a compelling way to open a video.
“I see a lot of clients who want to open a video with beautiful aerial drone shots to set the tone, and maybe that’s great in a longer-form documentary, [but] that is the worst possible way to start a video [on social]. You need to give the viewer someone to connect with. I think it’s just like human psychology.”
Her third tip is to provoke and surprise. “It’s really hard to break through if you’re not doing something different,” Stein said.
She cites a viral video for Adidas promoting a line of shoes the company had made from recycled ocean plastic. “Our opening clip was a squid trapped in plastic that was really close up, you couldn’t quite tell what was going on. And we had a lot of internal debates of, like, is this a good opening? Like, it’s not a person, we can’t really tell its nature, we can’t really tell what’s going on. But because of that I think it caused people to continue watching the video and ultimately led to its success.”
Chris Di Cesare, head of creative programming at Dice Creates, said that judging the success of a campaign ultimately means “becoming a part of the cultural zeitgeist.”
Added Ian Grody, chief creative officer at Giant Spoon, “When the work that we do is getting picked up in The New Yorker and it’s on the news and people are talking about it on social, that to me is true, meaningful success. It’s success that has resonated in a truly authentic way. And it’s not something that we fool ourselves into believing is success. It’s something that we’ve earned.
“We are building these Trojan horses that contain brand messages,” Grody continued, “but [they] need to arrive in the form of culture that you seek out, that you would pay to watch, pay to attend, instead of the thing that you pay to skip.”
The panel also touched on the impact of AI, viewing it generally as a force to be harnessed. When anyone can generate content in the style of, say, Wes Anderson, or Marvel or Star Wars, then “increasingly the emerging coin of the realm is going to be originality,” says Grody.
“Originality is going to become the most precious commodity in the advertising space, full stop. It’s going to be those people who can really operate at a high level, generating original work that are going to continue to break through as the playing field is levelled.”
Stein said she was excited and interested to see how creative agencies and brands can harness the “decentralization of creativity that exists when anyone can have a voice.”
She added, “One of the trends I really like seeing on TikTok is people making commercials for products and brands that sometimes are funny, and they’re parodies. Sometimes they’re really good.”
Agreeing with this, Grody said, “the fact that so many people who were creatively disenfranchised before now have the opportunity to make the stage and to have their work seen is a wonderful place for brands. [Brands] can continue to elevate co-creation opportunities because now you have all of these potential partners out there who are willing to participate, who are hungry for even more exposure, and brands can be collaborators, brands can be amplifiers.”
In order to do all of this effectively, agencies are advised to populate their team with “people who live and breathe the culture that you end up wrapping that story in,” said Grody.
“I have made it a huge priority, as have other people within the organization, to defy the advertising industrial complex, and reach out to different patches and pluck this incredible Ocean’s 11 of experts with unique passions that are meaningful to our clients. So, when we’re working on Activision, we have gamers on our team. The answer to me is that it all comes down to your people, who you staff, and making sure that you have the right people to meet all of these challenges.”
“Black Mirror:” Charlie Brooker Sees a Different Reflection
TL;DR
After years of exploring society’s dark absurdities, the sixth season of Netflix’s dystopian anthology series “Black Mirror” gazes at its own reflection.
In the episode “Joan Is Awful” we see deepfakes generating content tailored to individual users.
Charlie Brooker, the show’s sardonic mastermind, says he toyed with generative AI during the making of the show and found it lacked any semblance of original thought.
Like all good sci-fi, Black Mirror reflected our present into the future, but in the four years since the last run of episodes on Netflix the world seems to have become so dystopian that you couldn’t make it up.
The pandemic forcing everyone indoors, the riots on Capitol Hill fed by social media conspiracy, the rise and rise of generative AI, entrepreneurs commercializing space, and, of course, the metaverse.
The Emmy-winning anthology series is back and writer-creator-showrunner Charlie Brooker has been talking about how he took the opportunity to mix things up.
“It feels like the dystopia is lapping onto our shores at the present moment,” he told GQ’s Brit Dawson of the five-episode instalment.
“I definitely approached this season thinking, ‘Whatever my assumptions are about Black Mirror, I’m going to throw them out and do something different,’” Brooker said.
This included more comedy, particularly in the episodes “Joan Is Awful” and “Demon 79,” a horror story subtitled “Red Mirror” that draws on staples like Hammer and the work of Dario Argento.
“I sort of circled back to some classically Black Mirror stories as well,” Brooker said. “So it’s not like it’s a bed of roses this season. They’re certainly some of the bleakest stories we’ve ever done.”
He’s also perhaps not as wary of the future or of technology as his Black Mirror persona might suggest. He recalls how frightening it was in the 1980s during the height of the nuclear cold war.
“That didn’t quite happen! The other thing I would say, I do have faith in the fact that the younger generation seem to have their heads screwed on and seem to be pissed off. So that’s going to be a tsunami of people, it’s just that they’re not at the levers of power yet,” he says.
“We have eradicated lots of diseases and generally lots of things are going well that we lose sight of but it’s just a bit terrifying if you think democracy is going to collapse. That and the climate breaking down.”
In another pre-season interview with Amit Katwala at Wired, Brooker continues, “I am generally pro-technology. Probably we’re going to have to rely on it if we’re going to survive, so I wouldn’t say [Black Mirror] necessarily warns, so much as worries, if you know what I mean. They’re maybe worst-case scenarios.”
Three of the five episodes are set in the past, with seemingly no connection to the evils of the internet from past seasons.
“I think there was a danger that Black Mirror was becoming the show about consciousness being uploaded into a little disc,” Brooker explains to Emma Stefansky at Esquire. “Who says I have to set this in a near-future setting, and make it all chrome and glass and holograms and, you know, a bit Minority Report?” he asks. “What happens if I just set it in the past? That opens up all sorts of other things.”
However, one episode does openly “worry” about a near-future in which AI takes control of our lives in ways we hadn’t imagined. “Joan Is Awful” is about a streaming service called Streamberry — cheekily mirroring Netflix and clearly with the streamer’s consent — that makes a photoreal, AI-generated show out of a woman’s life.
It was specifically inspired by The Dropout, the Hulu mini-series about Theranos founder and convicted fraudster Elizabeth Holmes, along with the possibility of all of us having the ability to generate personalized media using AI. Except in Black Mirror’s take, this is another example of Big Tech using our private data for the entertainment of others.
People prefer viewing content “in a state of mesmerized horror,” the CEO of Streamberry says in the episode.
“Obviously the first thing I did was ask [generative AI] to come up with a Black Mirror episode to see what it would do,” Brooker told GQ. “What it came out with was simultaneously too generic and dull for any serious consideration. There’s a generic quality to the art that it pumps out.
“That was the first wave [of generative AI], when people were going, ‘Hey, look at this, I can type “Dennis Nilsen the serial killer in the Bake Off tent” into Midjourney’ and it’ll spit out some eerily quasi-realistic images of that, or ‘Here’s Mr. Blobby on a water slide,’ or ‘Paul McCartney eating an olive.’
“It’ll be undeniably perfect in five years, but at what point will it replace the human experience? It does feel now like we’re at the foothills of new, disruptive technology kicking in again.”
In an interview with Vox and Peter Kafka’s Re/code podcast, Brooker put his thoughts on generative AI and emerging tech another way: “It’s like we’ve suddenly grown an extra limb, which is amazing because it means you could juggle and scroll through your iPhone at the same time. But it also means that we’re not really sure how to control it yet.”
In general, it seems more accurate to say that Brooker is skeptical about people, rather than certain technologies. “Usually our technologies give with one hand and sort of slap us round the back of the head with the other,” he told Kafka. He thinks people tend to be the problem, rather than the tech, so “I wouldn’t want to delete this stuff from existence necessarily.”
The episode “Loch Henry” is set in the present day, following a pair of documentary filmmakers who plan to give a shocking hometown murder the lurid true crime treatment. “Loch Henry” relies on VHS tapes to build its narrative, instead of a smartphone app or webcam.
“It’s a weird one, because it is about the archive of the past that people are digging into,” Brooker tells Stefansky. “But it is also about the way all that stuff is now hoovered up and presented to you on prestige TV platforms — that we’re mining all these horrible things that happened and turning it into a sumptuous form of entertainment.”
He continues, “There’s nothing more frustrating than when you’re watching a true crime documentary, and it starts to dawn on you somewhere around Episode 3: They’re not going to tell me who did this. Not what I want. I want to see an interview with the killer. Go and generate one on ChatGPT.”
At Wired, Katwala speculates that maybe the next step is personalized content about personalized content. Society and social media have been moving in this direction for years, he says.
“One of the supposed benefits of generative AI is that it will enable personalized content, tailored to our individual tastes: your own algorithmically designed hell, so horribly well-targeted that you can’t tear your eyes away.”
But, he wonders, what happens to cultural commentary when everyone is consuming different stuff?
The irony is that while hyper-personalized content might be great for engagement on streaming platforms, it would be absolutely terrible for landmark shows like Black Mirror and Succession, which support a whole ecosystem including websites like Wired and NAB Amplify.
“We siphon off a portion of the search interest in these topics, capitalizing on people who have just watched something and want to know what to think about it. This helps explain the media feeding frenzy around the Succession finale and why I’m writing this story about Black Mirror even though we ran an interview with the creator yesterday,” Katwala argues.
“In a way, you could see that as the media’s slightly clumsy attempt to replicate the success of the algorithm.”
Web3 for Media and Entertainment, From Ownership to Distribution
TL;DR
The 2023 NAB Show presented a fireside chat with three industry leaders offering unique perspectives on the adoption of new Web3 experiences, shifts in the digital supply chain for premium content, and the use and adoption of NFTs across Media & Entertainment.
Moderated by Jay Williams, COO of the Infinity Festival, the panel featured Larry Johnson, global senior director at Oracle Strategic Clients Group; Michelle Munson, co-founder and CEO of blockchain network Eluvio; and Seth Shapiro, founding chair of the NAB Web3 Advisory Council and partner at Alpha Sigma Capital.
Before the promise of Web3 can be realized, the technology needs to be divorced from the early concept of the “right-clickable” NFT.
Friction and disillusionment are the two biggest roadblocks to success for Web3, the panelists agreed, but the technology underpinning it has the potential to be completely transformational.
Watch the full NAB Show 2023 session, “Web3 in Media & Entertainment from Ownership to Distribution: A Fireside Chat with Industry Leaders” above.
The 2023 NAB Show reminded us that Web3 and blockchain technologies have become more relevant than ever. A panel of industry leaders, including representatives from Eluvio and Oracle, discussed the transformative potential of Web3 for the Media & Entertainment industry, exploring how the blockchain will enable new opportunities for content distribution, fan engagement, and monetization of new multimedia NFTs and Web3-native interactive experiences. They also consider use cases for what’s currently working in Web3, and what isn’t.
The session was moderated by Jay Williams, COO of the Infinity Festival, and featured Larry Johnson, Global Senior Director of CME Industry Advisory Media & Entertainment at Oracle Strategic Clients Group; Michelle Munson, co-founder and CEO of utility blockchain network Eluvio; and Seth Shapiro, founding chair of the NAB Web3 Advisory Council and partner at Alpha Sigma Capital.
While many view Web3 as mere hype, a decentralized internet based on blockchain authentication has great potential as a distribution platform for content, the panelists argued. But before that can happen, the technology needs to be divorced from the early concept of the “right-clickable” NFT.
Digital collectables as a concept were initially very intriguing, Shapiro said, but the reality has been much more mundane. “If you’ve ever been in the art world or dealt with consumer products, [there’s] the idea that something can be proved to be authentic or inauthentic. Or the fact that a painting was owned by Jack Nicholson and then sold would increase its value for generations, potentially,” he detailed. “But then, when NFTs actually came into the popular culture, it became a JPEG that you sold for a lot of money, that you could often go up to a server and pull down yourself.”
Munson outlined Eluvio’s eco-friendly blockchain platform that enables content creators to store and stream content. “The notion of a non-fungible token, which is a great instrument on a blockchain to encode the ownership, what we call the provenance of a given asset, is a terrific and highly scalable technology, and can come in many forms,” she said.
“If you know this space, you know our customers are very concerned about churn, building fan experiences,” Johnson remarked about the enterprise commitment to Web3. “So that’s why they want to weigh in in this space, to help our customers and fans and community [go] all in on Web3.”
The current NFT marketplace is filled with friction and disillusionment, Munson said. “When you get down to it, the thing, the asset, that that non-fungible token proof of ownership refers to is somewhere else. And the somewhere else is a loose tie,” she explained. “A perfect example of that is a URL to some media that you can then go and obtain. And to date, most of the Web3 world that has leaned into NFT technology has been built around some kind of token here, and media over here with no real intrinsic tie. And you see the problem.”
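To make that “loose tie” concrete, here is a minimal, hypothetical sketch (purely illustrative, not Eluvio’s architecture, with made-up field names and URLs) of how a typical NFT’s on-chain record merely points at metadata and media hosted somewhere else:

```python
# Hypothetical illustration of the "loose tie" Munson describes: the token on
# chain records ownership plus a pointer, while the media itself lives at an
# ordinary URL that can move, change, or vanish independently of the token.
token_record = {
    "token_id": 4021,                 # what the blockchain actually tracks
    "owner": "0xAbC...123",           # ownership is provable on chain
    "token_uri": "https://example-marketplace.com/metadata/4021.json",
}

offchain_metadata = {                 # fetched from token_uri, entirely off chain
    "name": "Example Collectible #4021",
    "image": "https://example-cdn.com/assets/4021.jpg",   # the actual media
}

# The token proves who owns it; nothing intrinsically binds the JPEG at that
# URL to the token, which is the gap platforms like Eluvio are trying to close.
```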
Shapiro described the trail of disillusionment that led from Web1 and Web2, commenting, “We’re very solidly there now, and the same thing happened the last few times, where people went, ‘This is a sham and a mockery and it’s all lies,’ and it’s all, you know, imbeciles and Ponzi schemes. We’re back there now. But the technology is completely transformational.”
Friction in particular is a major issue, the panelists agreed. “It’s hard to get at the media in relation to the token, [and] that’s where a lot of the friction comes in, when you have to sign on with the wallet, manage signatures, and you still don’t have the asset that you really wanted,” Munson said.
This friction, said Shapiro, reminded him of the dot-com era, when people were still skeptical about making online purchases. “There was a huge controversy about whether people would ever enter a credit card into a browser,” he recalled. “Those of us who thought that they would were kind of castigated for a while and, of course, obviously, you don’t think about it anymore. But think about that being sort of where we are right now. This stuff is so murky and kind of janky and hard to use. And the rules seem very opaque and people lose their money.”
Intermediary marketplaces are another roadblock Web3 technologies seek to overcome. “That intermediary party is actually the thing that is determining the ownership, not the actual contract pointing at the media,” Munson said. “So you can see this whole ream of problem on the other side of opportunity.”
Evan Shapiro: What’s Changing (and Changed) in Media Consumption
TL;DR
Media cartographer Evan Shapiro delivered the closing keynote address at Streaming Media East, providing a close look at the latest consumer data in the global streaming marketplace.
It’s critical for stakeholders to understand shifts in demographics in the world’s population, Shapiro says, in order to identify untapped markets.
Younger generations are more likely to pay for services they want, he says, because they’ve been trained to do that since birth.
Shapiro predicts that the current advertising recession in the US will end in July, but that doesn’t necessarily mean the outlook is rosy.
Evan Shapiro loves to talk. And that’s a good thing, because his insights into the Media & Entertainment industry are invaluable, plotting the effects of disruption as the streaming universe changes its gravitational pull and reforms itself around new business models. The media cartographer and ESHAP CEO — known for his detailed maps and accompanying analysis charting the media universe — provided a close look at the latest consumer data at the Streaming Media East conference. You can watch the full session, “The Mind of the Modern Media Consumer,” in the video below.
Shapiro spoke about shifts in consumer demographics, how streaming is changing the television landscape, and survival tactics for an increasingly volatile ecosystem. He also predicted that the current advertising recession will end in July, with new data to back up that claim, and identified the biggest areas for growth.
“Constant disruption is now the operating system of our ecosystem,” Shapiro said in his opening remarks, noting that his job is to help stakeholders survive what he calls “the current media apocalypse.”
In a global marketplace, Shapiro said, it’s important to understand the demographic changes in the world’s population. “One of the things that you have to know is that the population on the planet Earth is completely different now from what it was when most of us were growing up,” he said, pointing out that 63% of the world’s population is now under the age of 40. “So if you’re over the age of 40, you’re in a minority for the first time in your life.”
But what’s even more critical to understand, says Shapiro, is how the demographics break down across regions: 33% of the world’s population is under 20, but that number changes drastically by region. In North America it’s 25%, and in Europe it’s 21%, but in Asia and Latin America that figure jumps to 32%, and in Africa it’s a whopping 51%.
“Half of the fastest growing economies on the face of the earth are in Africa,” he says. “So think about the world as you think about your business, look for opportunity outside of where you’re operating today.”
Shapiro also tackled misconceptions that younger consumers don’t want to pay for services. That’s “absolutely untrue,” he said. “The younger consumer is more apt to pay for services. They want to pay for media. We’ve raised three generations, the youngest generations, the largest generations, the most diverse generations on the planet, to pay for their media.”
He also pointed out that, according to surveys, the most important things to young subscribers are “relevant content, original content, refreshed content, and library size,” all of which are ranked as more important than cost.
Not thinking globally is a sure business-killer, Shapiro insists, using Roku as an example. “Roku is the number-one platform for streaming television on the planet Earth,” he said. “They’re not in the top five anywhere outside of the United States. So if you think about how to lose your business, focus only on America. And if you wanted to look at a good case study, look at Roku’s stock price over the last 24 months. Not a good story. Right? They moved far too late to go global, and it’s killing them right now.”
Companies like Samsung and Google, Shapiro said, are the new gatekeepers, and “in many cases, also your direct competitors.”
Samsung has already moved into publishing content, he said, noting that streamers have to be included in the company’s home screen offerings to be considered a global publishing brand.
“Google is the direct competitor and an aggregator of yours. So this idea that we have these ‘frenemies’ I am now rebranding them — or trying to — into collaborators. You’re competing with them. And you also have to bear hug them, you don’t have a choice,” he continued.
“The idea that you’re going to be able to operate around these collaborators, specifically, Google is adorably naive, it’s just not possible. So you have to think about a way to both compete with them, and collaborate with them simultaneously.”
But what about that ad recession, you ask? Shapiro had plenty to say about that, along with some cold, hard data to back everything up. The upshot is that churn is the killer of advertising. “Serial churning is the new channel changing,” he said, and “when somebody cancels a subscription, not only do [streamers] lose that subscriber revenue, they also lose those ad impressions.”
In Q4 of last year, Shapiro noted, SVODs signed up 41.3 million new subscribers and lost another 33.8 million, cancellations equal to 82% of those gross additions. “That’s a shitty business, and, by the way, not great for advertisers,” he commented.
The US advertising market has experienced nine months of decline, Shapiro observed, but “bold prediction for you — the ad recession will end in July.”
Math, he says, has provided the answers. “The reason why we’re in an ad recession is primarily due to the comparison to the year prior,” he says, looking at ad revenue from the first quarter of 2022.
“It’s hard to keep that pace up,” he adds. “Now in July, the comparison is going to be a lot easier than it was a year ago. The ad recession is going to end magically in the middle of this summer, because math, and you’re going to hear about how advertising is back this fall.”
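Shapiro’s “because math” argument comes down to base effects in year-over-year comparisons. With made-up numbers purely for illustration (these are not Shapiro’s figures):

```latex
g_t = \frac{R_t - R_{t-12}}{R_{t-12}}
\qquad
\text{e.g. } \frac{100 - 110}{110} \approx -9\%
\quad\text{vs.}\quad
\frac{100 - 100}{100} = 0\%
```

If last July’s ad revenue was an unusually strong 110 and revenue is flat at 100 now, the comparison reads as roughly a 9% decline; once the comparison month is itself the weaker 100, the same flat revenue reads as 0%, and any uptick registers as growth even if nothing about the business has changed.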
The outlook isn’t entirely rosy, however, Shapiro warns: “What’s going to happen though, is not all the ads are going to be shared equally amongst all the players. Because gravity sucks. Ads are going to go where the ads are most effective; ads are going to go where the ads are already working.”
How the Cinematography of “Succession” Takes You Inside
TL;DR
In the wake of the season finale of “Succession,” cinematographer Patrick Capone, ASC, series director Mark Mylod and senior colorist Sam Daley reflect on what made working on the series so special.
Both Mylod and Capone tried to not get in the way of the action that unspooled in the pivotal boardroom meeting in the final episode, letting it unfold as naturally as possible.
Following the devastation of the Season 4 finale, fans of Succession waited for a hint of more from the Roys. In its wake, we looked for signs of an offshoot story from supremo Jesse Armstrong or an origin tale, maybe examining the depths of how Marcia had earned her ultimate prize or how Kendall would finally put to use the half-Loganized skills he had begun to exhibit. But like the wait for a concert encore, the creeping realization arrived that that was that.
But Succession fandom means never having to apologize for picking over seasons, episodes, or even furtive glances contained in scenes. We’re all guilty of that, with enough room left over for even more dissection.
One of the show’s cinematographers, Patrick Capone, ASC, who had worked on the HBO series since its first season in 2018, recently commented on his time there. He summed up his postpartum feelings. “Listen, we’ve all had jobs that we love working on, and they were lousy movies or really good movies, but they were tough to work on. This is the perfect storm. It was great people. It had phenomenal scripts. I had a voice. I’m proud of my work. We traveled all around the world. And hopefully, there’ll be another job like this somewhere.”
Capone was talking to American Cinematographer’s Iain Marcks immediately following the show’s final grading session with senior colorist Sam Daley; they were both feeling raw and emotional, which made talking with them even more heartfelt, but they were ready to genuflect. It’s a poignant interview.
Marcks began with a question that hung in the air, “So what is there to say about the making of Succession that hasn’t already been said?”
Both interviewees settled on tributes to the medium of film. All four seasons had stuck to celluloid, but a world without Succession may mean tough times ahead, especially for New York’s film production community. Said Sam Daley: “I realized I don’t know any other episodics that primarily shoot film. I know of a few that may have done it for certain scenes or episodes, but primarily as the acquisition source. I thought it would be nice to remark on that so that people can appreciate when they watch our, you know, our final episodes that, you know, a lot of time and effort and craft went into making these images that were captured on film.”
Capone added a layer of worry to Daley’s point, “And my concern with Succession ending is we have enabled smaller films in the New York area to be able to shoot film because there is an active lab that’s staying above water because we shoot 30,000 feet a day. So I’m concerned with this lack of volume going to the Kodak lab, which might affect the film community in New York City if, indeed, this lab can no longer stay above water.”
The show primarily used Kodak Vision3 5203 50D daylight stock. “But often we had to use the [Kodak Vision3 5219] 500T tungsten, depending on where we are, what type of day it is,” Capone added.
In another post-finale session, Capone shared his thoughts with series director Mark Mylod. They spoke with Chris Murphy at Vanity Fair about the showdown of the Roy siblings, especially the scene where Shiv hurriedly exits the boardroom with the final vote for control of Waystar Royco hanging in the balance.
“We wanted to use the reflections and the glass bowl within the glass bowl within the glass building,” says Capone. The result is the Roy children on full display, putting on a show of their worst qualities and deepest insecurities in front of both board members and their employees. “With camera placement and the actors blocking, we were able to make their fight kind of on a little bit of a stage where the board members can see it, and yet they can’t hear it,” continues Capone. “Only we can hear it until it gets vocal toward the end.”
“It was probably the most important scene of the entire series,” added Mylod.
Both Mylod and Capone tried not to get in the way of the action that unspooled, letting it unfold as naturally as possible. “That’s the beauty of our style,” says Capone, “It’s just so subjective, and the camera has the ability to point the audience where we feel they should go to get what’s being told, like a fly on the wall.”
To properly capture the end of an American dynasty, finding a location that was appropriately devastating and stately was important. “We found these two offices. The main boardroom, and we found this other office that I had never shot in,” says Capone. “It’s one of the World Trade Center buildings. I want to say it’s number seven. And it’s about 35 floors up.”
To create that fishbowl effect, Capone had to change some of the overhead lighting in both the boardroom and the adjacent room where the siblings fight, filling the overhead with Astera lights. “When you show up at these buildings, you have no idea what kind of day it’s going to be,” Capone said. “It could be a hot, sunny day or a cool, cloudy day. So that was important to us, and to be able to use as much negative fill as we could.”
“Some of my favorite shots are very low depth of field, and the figures in the background through the glass are almost ET-like,” Capone continues. “They’re very, very thin slivers of line out of focus, and with the light from outside just wrapping around them. And I love stuff like that. It gives it a lot of depth.”
Capone and Mylod also unpacked their Succession journey with IndieWire’s Sarah Shachat, sharing anecdotes with a focus on Logan’s funeral in Episode 9, “Church and State.”
“The tonal tug between tragedy, comedy, and a sort of breathless incredulity is both very funny and very sad and is often carried through camera movement and the way the operators find character reactions. Multiple cameras covered all the eulogies at once, but the energy of the performances impacts how shaky or stable the camera feels to the viewer,” Shachat writes.
“Even Ewan’s (James Cromwell) eulogy was handheld, but it’s like a dance. When you feel that it’s emotional, when you feel that Roman’s becoming unhinged, you know, we feel a gut instinct [to have] more movement and move around him more. Ewan was a more stable foundation of a eulogy, so even though we’re still handheld, we didn’t feel necessarily that we had to dance around him as much,” Capone said.
The shift to Kendall, who has not had a great day with the women in his life, and the choice to emphasize his non-reaction to that moment, was also the kind of unplanned kismet that comes from leaving the camera operators open to where the emotion of a scene takes them. “Mark Mylod and I are sitting next to monitors all the time, and we’ll know something’s coming up, or the operators do it instinctively,” said Capone.
“At one point, we pretty much did a 20-minute take, similar to the day Logan died,” he continued. “The A camera operator and assistant had two cameras, and as soon as one ran out of film, they would just pick up another (camera body), and another team would reload it for them. We had five cameras at the funeral but six bodies because we would flip the A camera and keep going.”
If you wanted an educated take from this superb television series, it would be how we, as the audience, witnessed these selfish monsters’ often terrible behavior.
Many words and videos have endorsed how the cameras move through the Succession story; it’s ultimately a cynical eye, an anti-capitalist stance. Capone explains the original idea, “Billionaires cannot control the weather, they cannot control health, things like this. So the fly on the wall camera effect was the rest of the world watching these billionaires,” he says.
“They have no idea how good they have it. So we tried to create a naturalistic environment of classic films where the actors could move around. This is the most amazing ensemble I’ve ever been exposed to. The operators, I feel, fall into that ensemble,” Capone continues.
“Cinematography is more than just lighting for the cinema. Cinematography is camera placement, camera movement, and the ability to take the audience and point them in the direction that you think they should be watching. And that’s what we do so well, I think.
“So, we like mistakes, but we like mistakes that just happen to real life, that happen a lot of times. Whether it’s a late focus or a camera getting to an actor moment after his word. I was brought up in a classic cinema business where some DPs said, you know, the crosshairs have to be just to the left of the nose, and this far, and the headroom is here, and the horizon has to be there.
“If you look at artwork, there are no rules. You just need to have an image that helps tell that story and, more importantly, the emotions of that moment. And I think that’s something we’ve done pretty well, and people have picked up on. And they think they’ve picked up on things where they’re totally wrong. But other times, they pick up on things that are right on.”
HBO’s “Succession” takes cues from Dogme95, cinema verité, and other styles that use documentary techniques to create fictional stories.
AI Can Be An Awesome Creative Assistant, Here’s How
TL;DR
Harvard Business Review has three takes on AI’s potential future. In the first, AI is our creative sidekick, helping us innovate faster by assisting in content creation.
The dystopian second scenario has AI dominating creativity, potentially drowning out human creators. This might reduce innovation, and the personalization could lead to a more divided society.
The third scenario predicts a renewed “techlash” against synthetic creativity. As a result, people may begin to value human creativity more and be willing to pay a premium for it.
Currently, generative AI seems to work best with human partners, even acting as a catalyst for human creativity.
The Harvard Business Review outlines three scenarios that companies should prepare for in the brave new world of AI.
The most optimistic one is that AI will be your best creative assistant. AI tools will lift us all away from menial work so we can concentrate on the more interesting work of management and curation.
The authors, David De Cremer, Nicola Morini Bianzino and Ben Falk, suggest that we learn “prompt engineering” — the skill of asking the machine the right questions — to produce more relevant and meaningful content that humans will only need to edit somewhat before they can put it to use.
Overall, this scenario paints a world of faster innovation, where machine-augmented human creativity mainly enables rapid iteration.
A doomsday scenario is if machines monopolize creativity. Here, human writers, producers and creators are drowned out by a tsunami of algorithmically-generated content, with some talented creators even opting out of the market.
If that happens, an important question we will need to address is: How will we generate new ideas?
“In this scenario, generative AI significantly changes the incentive structure for creators, and raises risks for businesses and society,” De Cremer, Bianzino and Falk caution. “If cheaply made generative AI undercuts authentic human content, there’s a real risk that innovation will slow down over time as humans make less and less new art and content.”
This could also mean fundamental changes to what content creation looks like. If production costs fall close to nothing, that opens up the possibility of reaching specific — and often more marginalized — audiences through extreme personalization and versioning.
That sounds like a bonus — except that if enhanced personalized experiences are applied broadly, “then we run the risk of losing the shared experience of watching the same film, reading the same book, and consuming the same news,” says HBR.
“In that case, it will be easier to create politically divisive viral content, and significant volumes of mis/disinformation, as the average quality of content declines alongside the share of authentic human content.”
The third scenario the HBR authors consider is one where the “techlash” resumes, this time focused on algorithmically generated content. In this scenario, humans maintain a competitive advantage against algorithmic competition.
“One plausible effect of being inundated with synthetic creative outputs is that people will begin to value authentic creativity more again and may be willing to pay a premium for it.”
It follows that political leadership will need to take action to strengthen governance of information spaces to deal with the downside risks that could emerge. For instance, content moderation needs are likely to explode as information platforms are overwhelmed with false or misleading content, requiring human intervention and carefully designed governance frameworks to counter it.
Is computational creativity possible? A pair of academics have come up with a reassuring answer. Chloe Preece and Hafize Çelik write in The Conversation that the key characteristic of AI’s creative process today is that, unlike human creativity, computational creativity is systematic rather than impulsive. Generative AI is programmed to process information in a certain way to achieve particular results predictably, albeit in often unexpected ways.
“In fact, this is perhaps the most significant difference between artists and AI: while artists are self- and product-driven, AI is very much consumer-centric and market-driven — we only get the art we ask for, which is not perhaps, what we need.”
Preece and Çelik conclude that, so far, generative AI seems to work best with human partners — even acting as a catalyst for human creativity.
“Art history shows us that technology has rarely directly displaced humans from work they wanted to do. Think of the camera, for example, which was feared due to its power to put portrait painters out of business. What are the business implications for the use of synthetic creativity by AI, then?”
They note that AI has been known to “hallucinate” — an industry term for spewing nonsense — and that human skill is required to make sense of it — “that is expressing concepts, ideas and truths, rather than just something that is pleasing to the senses. Curation is therefore needed to select and frame, or reframe, a unified and compelling vision.”
A similarly optimistic view is held by Ahmed Elgammal, professor at the Department of Computer Science at Rutgers University. Writing at Science Focus, he says the current generation of AI is limited to copying the work of humans and that it must be controlled largely by people to create something useful.
“It’s a great tool but not something that can be creative itself,” Elgammal says. “We must be conscious about what’s happening in the world and have an opinion to create real art. The AIs simply don’t have this.”
Even with AI-powered text-to-image tools like DALL-E 2, Midjourney and Craiyon still in their relative infancy, artificial intelligence and machine learning are already transforming the definition of art — including cinema — in ways no one could have ever predicted. Gain insights into AI’s potential impact on Media & Entertainment in NAB Amplify’s ongoing series of articles examining the latest trends and developments in AI art.
The Metaverse Unleashed: Exploring the Convergence of Entertainment and Enterprise
TL;DR
A panel of industry leaders at the 2023 NAB Show delved into the potential for immersive experiences at the intersection of entertainment and enterprise, with an emphasis on virtual worldbuilding.
Moderated by MetaCities executive producer Gregg Katano, the panel included EndeavorXR founder & CEO Amy Peck, Wild Capture co-founder and head of studio Evan Pesses, and worldbuilder & strategist Rachel Joy Victor.
The idea of the metaverse has evolved from Neal Stephenson’s “Snow Crash” to a spatial computing platform that provides digital experiences, including digital twins of cities and completely abstract, fantastical worlds.
Interoperability remains crucial for the metaverse to function, allowing companies and brands to deploy content in a range of formats and experiences.
Watch the full NAB Show 2023 session, “Metaverses for Entertainment and Enterprise: Capitalizing on Worldbuilding” above.
Once primarily a playground for the media and entertainment industry, AR, XR and other immersive experiences are taking off in the enterprise sector, offering engaging, interoperable virtual worlds that are revolutionizing business operations and experiences in real estate, manufacturing, healthcare and more. A recent panel of industry leaders at the 2023 NAB Show delved into the potential for immersive experiences at the intersection of entertainment and enterprise, with an emphasis on virtual worldbuilding.
Entitled “Metaverses for Entertainment and Enterprise: Capitalizing on Worldbuilding,” the panel discussion was held in the Capitalize Experiential Zone at the LVCC’s West Hall. It was moderated by Gregg Katano, executive producer at metaverse event production company and consultancy MetaCities, and included EndeavorXR founder & CEO Amy Peck, Evan Pesses, co-founder/head of studio at Wild Capture, and worldbuilder & strategist Rachel Joy Victor.
The panelists explored the potential for immersive experiences in both entertainment and enterprise, discussing how virtual worldbuilding can be deployed across business sectors and how these experiences can be monetized.
Katano kicked off the session by pointing out that the idea of the metaverse has evolved from Neal Stephenson’s Snow Crash, which conceived a three-dimensional virtual space where humans represented by programmable avatars can interact with one another, to “a spatial computing platform that provides digital experiences,” including digital twins of cities and completely abstract, fantastical worlds, “anything that our wildest dreams can come up with.”
Peck, who advises companies on developing immersive experiences in her role at EndeavorXR, said that it’s important to dispel the hype surrounding these new technologies, which leads brands to leap into making investments without a clear strategy. “We’re in this interesting moment where technology gives us superpowers,” she said. “We were in a mindset of incremental change, and leveraging technology to optimize incrementally, but we have the opportunity to really leapfrog into the future. And we have agency in that future, especially in aggregate.”
When we talk about the metaverse, Victor said, the conversation usually surrounds the visual aesthetic of building virtual worlds. “And that’s kind of what the metaverse is,” she continued. “But I think the real unlock of the metaverse is really a computational unlock of being able to have this really bespoke data when we’re working within the context of volumetric worlds, or when you’re translating to the spatial web of having really targeted data of people as they move about the physical world and city space.”
There are two sides to this data, Victor noted: user data, and “this really in-depth data about the world space.” When those two pieces are put together, “you can have emergent experiences that take both things into account,” she said.
“So if we’re talking about this virtual world that exists, and we know that each kind of thing that exists within that world, every asset, everything is a computational data point. And now the user as they’re moving through that world is also a data point. It’s not a linear experience. It’s not even a nonlinear experience. Now, it’s this whole network of possibilities. And each thing that the user interacts with is very responsive. It’s just a very targeted type of experience, a targeted way of thinking about narrative now.”
Wild Capture, a suite of AI-driven tools that enable the integration of volumetric video and realistic digital humans, is an integral part of creating the spatial environment that will form the backbone of the metaverse, Katano said.
“We’re all about digital human technology,” said Pesses, describing how Wild Capture helps people bring assets into digital worlds. “What we believe is interoperability. We believe that the same asset goes all the way from the highest VFX film all the way to WebXR on your phone all the way to AR,” he continued. “We believe that the same asset not only goes to all those places, but can be handed off to different artists to do different things, as well as the opportunity for businesses to have multiple licensing opportunities.”
There are different ways to tap into what interoperability means for your specific IP or brand, said Victor. “Something that happens a lot is there’s this pressure of ‘Everyone is doing this, and I need to jump into this space. So let me see what everyone else is doing.’ But I think it starts from that brand story, from that IP story perspective.”
One important first step, said Victor, is understanding the story your brand is already telling, interrogating the mechanics involved along with what your audience is already interested in. “A lot of times, if you’re talking with a brand like Nike, they already have a really great community story around competitive acquisition of a product,” she said. “And so that translates into a mechanic of, here, let me put out these digital assets and see who wants to procure them. But if you’re another shoe brand, that might not be the same mechanic that, you know, your consumers are used to participating in. So I think it’s about that point of translation, understanding how to be authentic and in your story.”
Virtual production technologies are no longer “experimental,” reaching a level of maturity to justify the claims of cost savings and time efficiency.
A virtual production workflow is a new approach to filmmaking that could transcend any incremental changes in underlying virtual production technologies or tools.
A new white paper from NEP Virtual Studios and Prysm Stages takes a closer look at the budget, scheduling, and team impacts of virtual production during pre-production, production, and post-production, and the implications for workflows.
While virtual production has not yet reached the commodity stage, we are certainly beyond experimentation and into stable and scalable deployment.
The tools and technologies behind virtual production, including game engines, AR, real-time compositing, and in-camera VFX, “are now proven and repeatable,” according to Entertainment Technologists in a new white paper sponsored by NEP Virtual Studios and Prysm Stages.
“Virtual production technologies have reached a level of maturity where there is stability, choice, and sufficient case studies to justify the claims of cost savings and time efficiency,” the report states.
“Film producers and studios should be considering using the technology on their next productions. Not every shot and every show, but certainly enough to start the educational journey and build trust.”
Virtual production is the first technology that spans across the stages of production, breaking down silos between teams, workflows, and tools. Credit: Entertainment Technologists
The report’s overriding concept is that combining these technologies into a new virtual production workflow is what offers the greatest creative and efficiency gains.
“We make the case that individual VP technologies in isolation are an interesting evolution of the production process, but it’s when they are combined that they offer a potential revolution. It is only if these new technologies are systematically applied in a new workflow process that we can take full advantage of them.”
Virtual production covers not a singular technology but a combination of integrated systems, and there’s really nothing virtual about it. (It’s not a great descriptive name but it’s what the industry has settled on.)
“Virtual production” is not virtual but real, physical production, on a set, with cameras, microphones, actors and props all combined with real-time visualization workflows.
The key distinction with legacy production is that virtual production is enabled by a suite of new and emerging technologies that combine physical and digital elements on-set, in real time, to enable real-time feedback and iterations.
The report defines “virtual production workflow” as a change in the production approach rather than any specific change in technology. A robust VP workflow process will support, adapt and scale to continuous changes in underlying virtual production technologies.
“Adopting this more holistic process now not only future-proofs a producer and production team from changes during their current project but also builds the skills for the future disruptions that are coming, as real-time collaborative pipelines will continue to supplant slower offline rendering and iteration processes.”
However, this new approach requires diligence in planning and preparation to enable the gains — especially in allocating time and budget during pre-production for asset creation.
Where Sony Is Taking Virtual Production Development (Next)
TL;DR
Sony expands its Digital Media Production Center at Pinewood Studios with the first virtual production unit in the UK using Sony Crystal LED technology.
The new Pinewood virtual production stage employs Sony’s ultra-fine 1.2mm and 1.5mm pixel pitch Crystal LED panels, allowing for closer camera placement than the current industry standard of 2.6mm.
Sony also recently unveiled its Virtual Production Tool Set, a suite of software products that act as a failsafe for working with LED volumes.
Sony’s recent NAB Show and post-NAB Show announcements cement its central role in the virtual production industry. Not only has Sony opened a dedicated LED volume at Pinewood Studios outside of London, it has also released a range of production-specific products that advance the workflow.
Sony has expanded its DMPC, or Digital Media Production Centre, demo space at Pinewood Studios into an LED volume to showcase its latest Crystal LED panels and demonstrate how its new products work with the Venice range of cameras.
The opening follows Sony’s partnership with Studio de France to equip Europe’s first virtual production studio early last year in Seine Saint Denis, north of Paris.
As well as showing customers its new Crystal LED panels, in particular the ultra-fine 1.2mm and 1.5mm pixel pitch versions, the DMPC lets visitors see how the new products work with them. The current market standard pitch is judged to be 2.6mm.
Virtual Venice
The first of the new products announced at NAB is what Sony is calling the Virtual Production Tool Set, a suite of software products that act as a failsafe for working with LED volumes.
There are two parts to the toolkit; the first is a camera and display plugin for Unreal Engine that works in a number of ways. When users open the plugin they will see a virtual Venice camera interface with pull-down menus for its feature set. Users can virtually adjust those parameters, exporting them to the camera once they’re ready to shoot.
So this is a previs tool for this particular camera. As it’s part of the Unreal Engine, users can start to simulate shooting scenarios. To help with this, they’ll see a mannequin-type figure inside the Unreal Engine scene, which produces reflections from the virtual lights the user turns on. The idea is that users rehearse the shots they have planned, save the settings, and then export them to their Venice camera.
Deciding on an exposure setting or lens choices ahead of a shoot should give users a better sense of accuracy than before. “Fix it in pre,” they’re calling it.
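As a rough sketch of what that handoff amounts to (the parameter names and file format below are illustrative assumptions, not Sony’s actual plugin schema), the previs step locks decisions into a preset that travels from the Unreal scene to the camera team:

```python
import json

# Hypothetical previs preset: values chosen while rehearsing the shot virtually
# in Unreal Engine, saved out for the real camera on the day. Every field name
# here is an illustrative assumption, not Sony's export format.
venice_previs_preset = {
    "scene": "int_boardroom_dusk",
    "exposure_index": 800,
    "white_balance_kelvin": 4300,
    "shutter_angle_deg": 180.0,   # example value; LED refresh sync dictates the real choice
    "nd_filter": "0.9",
    "frame_rate_fps": 24,
    "lens": {"focal_length_mm": 40, "t_stop": 2.8},
}

with open("previs_preset_int_boardroom_dusk.json", "w") as f:
    json.dump(venice_previs_preset, f, indent=2)
```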
Another part of the toolkit is a Moiré alert. Moiré is a big issue when shooting in LED volumes and can ruin a shot. It’s a problem mainly for CMOS sensors when shooting highly repetitive patterns, on clothing or brick walls, for instance. But Sony’s new tool warns users when Moiré is about to appear, a by-product of how close the screens are and the resolution they have.
Again, this feature is available inside the Unreal plugin when users are simulating their shoot — but this time it works for cameras other than the Venice. Users will have to fill in some parameters such as LED type and contrast ratio, along with the correct pixel pitch of the panel. Other changeable parameters include the color of the alerts. Red is usually selected for “Moiré is present” warnings.
It will already know the sensor and lens, which gives it depth-of-field information. As users move the camera virtually, they will see a series of different colors, from green, which means no Moiré, to yellow, “a chance of Moiré,” to red, “you are experiencing Moiré.”
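Sony hasn’t published how its alert works, but the underlying geometry can be sketched simply: moiré risk climbs when the LED pixel grid, as projected onto the sensor, approaches the sensor’s own photosite spacing while the wall sits inside the depth of field. Below is a rough, hypothetical heuristic for that idea (illustrative thresholds, not Sony’s algorithm):

```python
def moire_risk(pixel_pitch_mm: float,
               wall_distance_m: float,
               focal_length_mm: float,
               sensor_photosite_um: float,
               wall_in_focus: bool) -> str:
    """Rough moiré-risk heuristic for shooting an LED wall (illustrative only).

    The LED pixel grid is imaged onto the sensor at roughly
    pixel_pitch * focal_length / wall_distance. When that projected spacing
    nears the sensor's photosite spacing and the wall is in focus, the two
    grids can interfere and produce moiré.
    """
    wall_distance_mm = wall_distance_m * 1000.0
    projected_pitch_um = pixel_pitch_mm * 1000.0 * focal_length_mm / wall_distance_mm

    if not wall_in_focus:
        return "green: wall defocused, little moiré risk"
    ratio = projected_pitch_um / sensor_photosite_um
    if ratio < 2.0:   # wall pixels sampled near or below the sensor grid
        return "red: moiré likely"
    if ratio < 4.0:
        return "yellow: a chance of moiré"
    return "green: no moiré expected"

# Example: 1.5mm pitch wall, camera 3m away, 40mm lens, ~6-micron photosites.
print(moire_risk(1.5, 3.0, 40.0, 6.0, wall_in_focus=True))
```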
Color Calibrator
Usually, calibrating a camera against an LED volume takes around two hours to complete. Sony claims its new calibrator does the same thing in about 15 minutes. The calibrator plays a series of test patterns on the wall, which are recorded by the camera. The resulting clip is imported into the calibrator and, through a series of alignments and corrections, a profile is produced. This profile is then exported as a 3D LUT, which in turn is imported into Unreal Engine and applied as a .look file to the LED wall.
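Sony hasn’t detailed the calibrator’s internals, but the general shape of that workflow (record known test patches off the wall, solve for a correction, bake it into a 3D LUT) can be sketched as follows. The patch values are made up, and a simple least-squares 3x3 fit stands in for whatever the real tool does:

```python
import numpy as np

def fit_correction(recorded: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Least-squares 3x3 matrix M so that recorded @ M approximates reference.

    recorded, reference: (N, 3) arrays of linear RGB patch values in 0..1.
    """
    M, *_ = np.linalg.lstsq(recorded, reference, rcond=None)
    return M

def write_cube_lut(M: np.ndarray, path: str, size: int = 17) -> None:
    """Bake the correction into a uniform 3D LUT in .cube format (red varies fastest)."""
    grid = np.linspace(0.0, 1.0, size)
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in grid:
            for g in grid:
                for r in grid:
                    out = np.clip(np.array([r, g, b]) @ M, 0.0, 1.0)
                    f.write(f"{out[0]:.6f} {out[1]:.6f} {out[2]:.6f}\n")

# Hypothetical measurements: what the wall was asked to display (reference)
# versus what the camera actually recorded through the panel and lens.
reference = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                      [1.0, 1.0, 1.0], [0.5, 0.5, 0.5], [0.2, 0.6, 0.4]])
recorded  = np.array([[0.92, 0.04, 0.02], [0.05, 0.90, 0.06], [0.03, 0.05, 0.88],
                      [0.95, 0.93, 0.90], [0.47, 0.48, 0.46], [0.20, 0.55, 0.38]])

write_cube_lut(fit_correction(recorded, reference), "wall_correction.cube")
```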
Sony’s new products, including the recently launched 1.2 and 1.5mm pixel pitch panels, mean real forward movement in LED volume development. The finer pixel pitch allows cameras to get closer to the wall; a 1.5mm pixel pitch puts the camera’s minimum distance from the volume at around three meters. This could also mean stages can become smaller with the same visual results. A 2.5mm pixel pitch puts the camera at eight meters from the wall, making the stage proportionally larger.
With tools like the color calibrator, set-up times can be shorter and calibration can be completed by staff who are not color specialists, saving time and staffing costs.
“Closing the Knowledge Gap”: Who Needs to Know What with Virtual Production
TL;DR
The 2023 NAB show was the scene of a top-level discussion by major industry players who have been involved in the development of virtual production.
Moderated by Eric Rigney, the panel discussion featured UNLV founding artistic director Francisco Menendez; James Blevins, VP consultant for “The Mandalorian”; ETC’s Erik Weaver; Rochester Institute of Technology professor David Long; and Local 600 Camera Guild DIT and production technologist Dane Brehm.
Virtual production isn’t just a backdrop, but is instead an entire studio space that filmmakers need to understand when designing shots in order to maximize VP’s strengths and mitigate its downsides.
Now that virtual production is an established part of the production landscape, more people from more spheres of the industry require answers about best practices for its workflow.
Who exactly needs to understand the finer points of setting up virtual productions?
What do filmmakers intending to work with LED walls need to know about lighting, color space, frame rate, shutter angle, and the relation of pixel pitch to blocking? What other variables are integral to getting the most out of the process?
This year’s NAB show was the scene of a top-level discussion by industry players who have been involved in trying to answer these very questions.
Moderator Eric Rigney, instructor at Mo-Sys Engineering Academy and a member of ETC (the Entertainment Technology Center), led a discussion on the topic with Francisco Menendez, founding artistic director at UNLV, which has just installed its own digital wall with plans to do four VP productions per semester; James Blevins, VP consultant for The Daily Show and The Mandalorian; ETC’s Erik Weaver, project leader of their VP short film, Fathead; David Long, professor at Rochester Institute of Technology; and Dane Brehm, Local 600 Camera Guild DIT and production technologist, who has been educating filmmakers about various aspects of VP for more than a year. Watch the full session below.
Menendez stresses that virtual production “is not a backdrop.” Rather, it is an entire studio space that filmmakers need to understand when designing shots in order to maximize VP’s strengths and mitigate its downsides. He observes that factors such as the type of cross lighting that works best in front of an LED wall, and the distance between actors and the wall, can make an enormous difference in selling the illusion of VP.
Menendez notes that those involved in lens choice and the wall’s pixel pitch (the spacing between adjacent LED pixels, which determines the density of the image on the wall) should include directors, cinematographers, gaffers, and more, all of whom should have a working knowledge of all these aspects of VP before embarking on a shoot.
“If it’s flat, you’re pretty good,” Brehm, who has served as DIT on shows using virtual production, says about LED walls. “If it starts curving, there’s some really goofy stuff that happens,” both with picture and with the sound reflections the curved walls can cause (a subsequent session, entirely about audio, would delve into such issues more deeply.)
Blevins touched on some of the specific jobs that have come about specifically because of VP. One high-pressure position he points to is the person handling the playback of the real-time engine (such as Unreal) used to generate the wall’s imagery.
The new job title is unique, he adds, “but it reminds me of when in the commercial world, a Flame operator would have a client right behind them. And it’d be very high tension.” This job, he surmises, would also suit someone who can be cool in that type of environment. The industry is, “finding people who are drawn to this kind of work and [who can] make a clear distinction between a reasonable and an unreasonable request without saying the word ‘no.’”
Weaver, speaking of the new VP supervisor position, explains “The VP supe really has to translate [all the VP-specific terminology] to everybody.” The person holding that title, he says, should be able to answer questions such as, “How does your role as a director change? How does your role as a DP change? How do you communicate [ideas] back to the VP supe?”
Long adds that for a smooth production process, communication must go beyond the set itself. People working in the production office “need to expect what the pacing might be for equipment setup.”
He lays out some key questions for that group, including, “How do you pre-produce? What is the timeline leading up to the shoot?” Preparation, he sums up, will certainly take longer but it can save time in post. Producers, unit production managers and others not based on set need a working knowledge of these tradeoffs in order to make smart production decisions.
Virtual production, Rigney says, “is not an individual job. Everyone has to play a role in this. It’s not some tool that [a few] people are going to operate. The whole production needs to participate in the process.”
The discussion is a fascinating primer about this exciting new field.
The rise of hyper-realistic synthetic media is expected to be accompanied by AI technologies that enable infinite versions of content to be generated and controlled by individual consumers, according to panelists at the Cannes Next conference.
Hovhannes Avoyan, co-founder and CEO of Picsart, suggests that the line between content consumption and creation will become very thin, with individuals able to control hyper-personalized experiences.
Avoyan believes that AI tools will not replace people but will enhance productivity, acting as a co-pilot to take on labor-intensive work and make creativity more affordable and accessible.
Anna Bulakh, head of ethics and partnerships at AI startup Respeecher, explained how they created a synthetic voice based on a person’s voice recordings.
The speakers emphasized their work as “synthetic media,” digital content created for the creative media industry, distinguishing it from deepfakes, which have more negative and possibly illegal implications.
The rise of hyper-realistic synthetic media may also be accompanied by AI technologies that enable infinite versions of content to be generated and controlled by individual consumers, according to panelists speaking at the Cannes Next conference.
Speakers including Hovhannes Avoyan, co-founder and CEO of Picsart, suggest that the border between consumption and creation will end up being very thin, even nonexistent.
“If you really do come up with personalization of the content, it ultimately could mean that you can watch the same movie a different way that I watch the movie,” Avoyan said. “It means you will be in control of what you’re going to see. You can make your own movie and you don’t even need distribution hubs to get this. You can generate a movie on-the-fly and you don’t even need to go to Netflix to do it.”
Picsart is a suite of online photo and video editing applications. Avoyan said he believes that AI tools like these are not going to replace people but instead will enhance productivity.
“Think about AI is like a co-pilot,” he explained. “It’s a mentor, a helper, an assistant to get jobs done. It can take the most labor-intensive work and let you [get on] with the more fun, cool and creative part — making creativity more affordable and accessible. We can say [generative AI] is democratizing creativity.”
However, whether and how we use AI is up to us, he insists.
“I believe the competition will be between people who are using AI versus those who are not. And people using AI are going to be 10 times more productive versus people that are not using AI tools. [Those with AI skills] are going to be taking the jobs of other people.”
Also on the panel was Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup behind the voice of Darth Vader in recent editions of the Star Wars universe. She explained how Respeecher created a synthetic voice based on two to 30 minutes of a person’s voice recordings. Respeecher has a library of voices available for use on anything from audiobooks to ads with permissions.
“What it means is that we preserve all emotions and intonations, and [the technology] is language agnostic, so all languages are covered. It means that your voice is part of biometric data,” Bulakh said.
The speakers were also keen to label their work as “synthetic media” — that is, digital content created for the creative media industry, as distinct from deepfakes, which have more negative and possibly illegal connotations.
How AI Is Already Democratizing Production and Post
TL;DR
A 2023 NAB Show session hosted by NAB Amplify talks up the benefits of AI as a tool to improve efficiency in film production.
The introduction of AI into the craft stages of production prompts existential questions about the value of current heads of department.
AI is considered to be liberating as well as disruptive for all aspects of moving image storytelling.
Watch the full NAB Show 2023 session, “Generative AI: Coming Now to a Workflow Near You?”
Generative AI is changing not only the economics and logistics of film production, but also the entire creative process as a whole.
In an insightful discussion held at NAB Show, Pinar Seyhan Demirdag, co-founder of Seyhan Lee, and Yves Bergquist, director of the AI, blockchain and media project at the Entertainment Technology Center at USC (the think tank funded by all of the major Hollywood studios and tech companies like Google, Microsoft and Amazon), consider the implications. Watch the full session, “Generative AI: Coming Now to a Workflow Near You?” in the video at the top of the page.
Demirdag kicked off the conversation by talking about Cuebric, the AI filmmaking tool developed by Seyhan Lee. “It is, to my knowledge, the first virtual production tool that runs on generative AI,” she explains. “It’s a tool that enables filmmakers to go from concept to camera in minutes. Other services that our company offers include generative AI VFX for films and advertising.”
In developing Cuebric, Demirdag explained, Seyhan Lee spotted several gaps in the current film and TV workflow that could use an AI boost:
“For example, on a film set, it feels always like everybody’s hurrying up… and then there’s always this constant waiting, everybody waits. Then, in the VFX process, there’s a bunch of repetitive tasks like rotoscoping. I don’t know how many of you rotoscoped in your life but if ever I offer you a solution to push a button, none of you will regret not rotoscoping.
“The third is a barrier with the virtual production workflow — it’s a completely new technology and directors of photography especially are terrified of it. 3D real-time engines require an expert to operate them and it’s quite costly.”
There’s another problem that generative AI could help solve, too: the cost of reshoots. Seyhan Lee’s research found that reshoots, even on “healthy movies,” can cost anywhere from 5% to 20% of the budget. “For a medium-size movie, it costs the production $375,000. And for a big-budget movie, it costs up to $24 million for reshoots alone,” Demirdag added.
“How about we make a tool where we can save the production even a fraction of that and bring the mental health of the producers back?”
Bergquist explained that AI is making immediate inroads into enabling faster and cheaper pre-production. The use of generative AI for storyboarding is among the most accessible use cases for filmmakers.
“Down the road a lot of visual production is going to be extremely disrupted [by AI]. The Adobes of this world are already integrating AI into their software,” he said.
Bergquist predicted that many VFX companies will integrate generative AI into their pipelines. Not only would this make content faster and cheaper to produce, he said, “but it opens up a lot of opportunities for new creatives, riskier content and riskier content formats to be produced.”
Forecasting what might lie beyond automating workflows, he commented that AI would be liberating as well as disruptive for all aspects of moving image storytelling.
“The macro trend is giving individuals extremely powerful tools that used to be reserved for institutions. Information used to be in the monopoly of institutions, now in the hands of everybody with the internet. This is really putting the tools of large-scale high value production into the hands of everybody,” he said.
“We’re heading into a world where everyone including TikTok influencers and Instagram influencers and all sorts of micro content creators are going to have the tools to make a full anime series, for example. This is not something the entertainment industry is ready for.”
As a neat example, Demirdag pointed out that the bullet time technique, developed painstakingly from research into photogrammetry by Paul Debevec in 1998 and employed in The Matrix the following year, “was a megalithic invention” at the time, but that now “you can basically have a cheap iPhone, and you get a similar effect as The Matrix.”
Bergquist continued the theme, theorizing that taking the tools of production out of the hands of large, expensive organizations and putting them in the hands of everybody means “we’re at the cusp of just an explosion of creativity. And that’s really, really exciting.”
However, both were in agreement that just having generative AI doesn’t mean it will produce a great story. “If you look at the history of independent cinema — most of independent cinema is just garbage,” Bergquist said.
“There are very, very few independent filmmakers who are very, very talented. So is AI going to take creativity to a completely new level of quality in general, in terms of how deep we reach into the human condition and tell stories that have never been told before? Or is it just going to be painting over a lack of talent?”
Demirdag dubbed this “the danger of normalization of mediocrity.”
The pair then held an interesting discussion about the merits and possibility of AI as a creative force in its own right. Demirdag argued that much of the fear about using generative AI in the creative arts is because we don’t call it what it is: a tool.
“This tool is great, this tool doesn’t work. This tool will help me, this tool will ameliorate my workflow, this tool is going to help me make more money. This tool is complicated too to understand, but I’m going to read some books and understand it,” she said.
“Unfortunately, our collective subconsciousness is tainted by [negative] stories: 2001: A Space Odyssey, RoboCop, Terminator, Blade Runner. Like we’re all entering forcefully, very fast, into a zone that does not serve the elation of humankind. But it is our responsibility as every single human being to research what AI does do. It’s actually quite simple. There’s a data set and then there’s an algorithm and it produces results in order to serve your creativity.”
Bergquist made the distinction between the craft of production and the decision making that leads to the craft. While AI could vastly improve the efficiency of production, it would still leave humans to decide what to make for audiences.
“As all the crafting part of creativity becomes a lot more automated, a lot faster, a lot more optimized, my question is this. Does the quality of creativity lie in knowing about the craft, or does it just lie in the kind of creative decisions that you’re making?” Bergquist asked.
“Are there elements of knowing the craft, of being trained in the craft, versus the decisions that are material to good creativity? How much of that quality, of knowing about the craft, of what aspects of the visual arts, just disappears? If production becomes just the push of a button, what impact would this have on art? Or does telling the algorithm what to make pack all the creative decision making up front? Does knowing about the craft make you a better decision maker?”
These are great questions which will horrify editors and cinematographers and production designers.
Many, including cinematographers and directors, regard the up-front creative discussion of a script, before principal photography, as the most inspiring part of the process. It’s where many believe the show is designed and created (editors would disagree), but surely all great filmed art is the outcome of a multiplicity of talented people, and of the happenstance of people and technology coming together to fulfill a vision.
Demirdag is in no doubt: “Generative AI has nothing to do with creativity. It has everything to do with being your parallel processing, never tiring, just your assistant constantly giving you options for you to curate, review, and select.”
You could probably program an AI to factor in artifacts and deviations from an original idea, but would this produce, say, Killers of the Flower Moon?
Hollywood’s Climate Story: A New Script for Sustainability
TL;DR
The 2023 NAB Show assembled a panel of Hollywood insiders driving investments in green business, amplifiers raising public awareness, and fighters on the front lines of reducing carbon emissions for “Changing the Climate in Hollywood.”
Moderated by Lydia Pilcher, film & TV producer-director-writer at Cine Mosaic, the panel featured Kimberly Burnick, director of sustainable production & content at NBCUniversal; Rachel Kropa, managing director of nonprofit + science at the FootPrint Coalition; and John Rego, vice president of sustainability at Sony Pictures Entertainment.
Panelists emphasized the importance of integrating climate narratives into storytelling, and the potential of the M&E industry to influence cultural norms around climate change.
The panelists also discussed the environmental impact of production processes, particularly the overuse of diesel generators, and the need for cleaner energy sources.
Watch the full NAB Show 2023 session, “Changing the Climate in Hollywood” above.
In the epic saga of climate change, Hollywood has a starring role to play. Not just in the stories it tells, but in the actions it takes. But is Hollywood doing enough to battle the climate crisis? From reducing reliance on diesel generators to incorporating climate themes into its narratives, the industry has a unique opportunity to lead by example and inspire a global audience to do the same. The question remains: How is Hollywood wielding its enormous power to change cultural norms around this existential global threat?
A recent session at the 2023 NAB Show tackled this critical issue head-on, assembling Hollywood insiders driving investments in green business, amplifiers raising public awareness, and fighters on the front lines of reducing carbon emissions to discuss their mandate to normalize climate action within the Media & Entertainment industry and propel the movement towards embracing sustainability.
The panel discussion, “Changing the Climate in Hollywood,” was moderated by Lydia Pilcher, film & TV producer-director-writer at Cine Mosaic, and featured Kimberly Burnick, director of sustainable production & content at NBCUniversal; Rachel Kropa, managing director of nonprofit + science at the FootPrint Coalition; and John Rego, vice president of sustainability at Sony Pictures Entertainment. Watch the full session in the video at the top of the page.
Emphasizing the importance of integrating climate narratives into storytelling and the potential of the industry to influence cultural norms around climate change, the panelists also discussed the environmental impact of production processes, particularly the overuse of diesel generators, and the need for cleaner energy sources.
“Climate storytelling,” however, may not mean what you think it does.
“When we use the term climate storytelling, people think that it’s a movie about climate catastrophe, or it’s a big climate-themed movie plot, which is not necessarily the case,” said Pilcher, who also co-chairs the Directors Guild’s sustainable future committee and helps lead a joint working group with the Writers and Producers Guilds on climate storytelling. “We’re talking about a spectrum,” she continued, “where climate can be a part of the story, even if it’s just a conversation between two actors, or a job that somebody has, or a backstory from a climate-related event.”
“We have built a program that essentially embeds sustainability into our existing creative processes,” Burnick said, recounting NBCU’s efforts to incorporate climate storytelling into the studio’s film and television projects with its GreenerLight program. Announced earlier this year, the initiative mandates that every greenlight package across the Universal Filmed Entertainment Group will include a sustainability plan, ensuring that sustainability is built into the planning process from the beginning, including script development, locations, and set needs, as well as on-screen behaviors.
“The door has opened to start talking about what we are seeing on-screen,” Burnick said. “As a media company, our biggest impact is our audience,” she continued, pointing to behavioral shifts the studio has observed such as reductions in drunk driving incidents following designated driver storylines in a popular series. “There’s this realization that we also have a responsibility as storytellers to do the same thing around climate.”
Kropa touted new technologies that help monitor and regulate power usage. “The advent of the type of compute that we have now, it’s able to figure out a motor that can modulate on the spectrum of usage,” she said. “So maybe a diesel generator, you would be able to say ‘I only need this much power today to run this set of things,’ and the computer can decide to put it at that level. So it’s only using exactly what you need.”
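Kropa’s description of compute matching generator output to what a set actually needs, rather than running flat out, can be illustrated with a toy scheduling sketch. Every number and setpoint here is invented for illustration and is not drawn from any product discussed on the panel.

```python
# Toy illustration of load-matched generator operation: pick the smallest
# power setpoint that covers the expected load instead of running full out.
# All numbers are invented for illustration.

GENERATOR_STEPS_KW = [25, 50, 100, 150]  # selectable output levels

def choose_setpoint(expected_load_kw: float) -> int:
    """Return the smallest generator step that still covers the expected load."""
    for step in GENERATOR_STEPS_KW:
        if step >= expected_load_kw:
            return step
    return GENERATOR_STEPS_KW[-1]

hourly_load_kw = [18, 22, 60, 75, 75, 40, 20]  # e.g., rigging, shooting, wrap
fuel_per_kwh = 0.08  # gallons per kWh, illustrative

modulated = sum(choose_setpoint(load) * fuel_per_kwh for load in hourly_load_kw)
flat_out = len(hourly_load_kw) * GENERATOR_STEPS_KW[-1] * fuel_per_kwh

print(f"Fuel, modulated: {modulated:.1f} gal vs. flat out: {flat_out:.1f} gal")
```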
The panelists stressed the importance of high-level support within studios and the role of process innovation in addition to technological innovation. They also addressed the challenge of portraying climate change in media without it coming across as propaganda, suggesting a focus on health and safety issues and making climate a normalized part of the conversation.
“Most innovation happens through incremental steps, and half of innovation is process innovation. It’s not actually the devices we see down there. It’s how things are applied,” Rego said. “One of the things that we’ve worked with for so long around how to introduce sustainable production, and how to introduce sustainability in anything our companies do, is also very much driven by process. And I think that really needs to be highlighted.”
Bringing in new technology, he underscored, won’t necessarily solve the environmental crisis. “It is teaching people and talking about behaviors and nudging people in the right way of putting something into place, like a power plan” that addresses power sources and how much power a given production actually needs.
Turning to practical applications, Kropa described various initiatives at FootPrint Coalition aimed at normalizing conversations and behaviors surrounding sustainability. “We have climate investors, climate entrepreneurs, people who make anything from clean beauty products to mushroom leather,” she said. “We also have a show coming out that does eco-modifications of classic cars. There’s a ton of things in there that are material changes, but also energy-type changes.”
How Productions Can Meet the Industry’s Climate Goals
Watch the full NAB Show 2023 session, “Hitting 2030 Sustainability Goals from Prep to Wrap” above.
TL;DR
The 2023 NAB Show assembled a panel of industry experts to discuss strategies for achieving the PGA’s ambitious call to action to reduce carbon emissions by 50% by the year 2030.
Moderated by the ICG’s production technology specialist and business representative Michael Chambliss, panelists included Green Spark Group president & founder Zena Harris, MBS Group’s senior director of sustainability Amit Jain, DP Cynthia Pusheck, and Koerner Camera Systems rentals manager Sally Spaderna.
Fuel consumption, the panelists agreed, is the biggest carbon emitter for most productions, but even small, subtle changes can have a major impact.
As a significant driver of global culture, the media and entertainment industry is now confronting its own role in climate change, with sustainability emerging as a paramount concern across the sector. A recent panel at the 2023 NAB Show tackled this critical issue head-on, discussing the path towards achieving the Producers Guild of America’s ambitious call to action to reduce carbon emissions by 50% by the year 2030.
The panel discussion, entitled “Hitting 2030 Sustainability Goals from Prep to Wrap,” was presented by the International Cinematographers Guild IATSE Local 600 and the Production Equipment Rental Group (PERG), covering the steps that producers, department heads, suppliers and studios can take to meet the industry’s sustainability goals. The session was moderated by Michael Chambliss, ICG’s production technology specialist and business representative, and featured Green Spark Group president and founder Zena Harris; Amit Jain, senior director of sustainability at production service provider MBS Group; Cynthia Pusheck, ASC (Good Girls Revolt, Sacred Lies, Our Flag Means Death and Beacon 23), who serves as co-chair of the ASC Vision Committee; and Sally Spaderna, rentals manager at Portland, Oregon-based Koerner Camera Systems. Watch the full session in the video at the top of the page.
Fuel consumption, the panelists agreed, is the biggest carbon emitter for most productions, but even small, subtle changes can have a major impact.
Pusheck recounted her experience working with production sustainability consultant Green Spark Group. “It’s very top-down. We were lucky to have a producer who really cared about this,” she said, recalling how each department was tasked with finding ways to reduce fuel consumption. “Telling your camera team, like, ‘We can’t do rushes to get equipment, they’re gonna wait and do three or four runs.’ Every department was keyed into ‘How do you reduce fuel?’ from me asking for green car chargers to encouraging the crew to bike. It was great. But not every show is that committed.”
The panelists also emphasized the importance of calculating carbon emissions in the industry to understand where to focus sustainability efforts.
“A very first step is to understand what your carbon emissions are,” said Harris. “So if you’re not calculating your carbon emissions as an organization, as a production, you need to be doing that so that you can understand where to focus your efforts, so that you’re not spinning your wheels and doing a sustainable practice that you think is great [but] really doesn’t have an impact overall on your total carbon footprint.”
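Harris’s first step, calculating emissions, generally comes down to multiplying activity data (fuel burned, electricity used, miles flown) by published emission factors. The following is a minimal sketch of that arithmetic; the factors are rough, illustrative approximations, and a real production should substitute its studio’s or region’s official values.

```python
# Minimal sketch of a production carbon calculation: activity data x emission factor.
# The factors below are illustrative approximations (diesel ~10.2 kg CO2e per US gallon,
# grid power ~0.4 kg CO2e per kWh); replace them with your region's published values.

EMISSION_FACTORS_KG_CO2E = {
    "diesel_gallons": 10.2,       # generators, trucks
    "gasoline_gallons": 8.9,      # passenger vehicles
    "grid_electricity_kwh": 0.4,  # stage power, offices
    "air_travel_miles": 0.2,      # per passenger-mile, economy
}

def footprint_kg(activity: dict) -> float:
    """Sum emissions for a dict of {activity_name: quantity}."""
    return sum(EMISSION_FACTORS_KG_CO2E[name] * qty for name, qty in activity.items())

shoot_week = {
    "diesel_gallons": 1_200,
    "gasoline_gallons": 400,
    "grid_electricity_kwh": 9_000,
    "air_travel_miles": 15_000,
}

print(f"Estimated footprint: {footprint_kg(shoot_week) / 1000:.1f} tonnes CO2e")
```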
LED walls enabling virtual production have allowed productions to greatly reduce their carbon footprint, the panelists observed, but aren’t necessarily an ideal creative choice.
“Nothing beats shooting on location, but it is detrimental to the environment,” Spaderna said. “And as we’ve seen with what’s happening to the environment, you can get boned by the weather. You know, we’ve had productions shut down for weeks because of the smoke from forest fires, or extreme heat, or hurricanes and wind storms. And, unfortunately, it’s just getting worse.”
Requiring reporting on carbon emissions, Jain said, allows media companies to align their sustainability goals with on-set production practices. “A lot of the larger entities now require that reporting for their productions,” he noted. “So now they have baselines, they have targets, they have data to analyze and say, ‘Okay, well, how do we reduce our fuel use? How do we reduce our electricity?’”
Digital scouting is another solution productions are employing to reduce their carbon footprints. “It can be great,” said Pusheck. “And I think we’ve all learned what COVID ramped up was this ability of like, ‘Oh, we can all have meetings, we don’t all have to drive to the spot or the location.’ The scout can go onto video, and we can all watch it back here. We don’t all have to get in a van because we don’t all want to be in the same van right now.”
The adoption of digital scouting has also led to other solutions for reducing travel emissions, she added. “Digital scouting, it’s fantastic. There’s just a lot of solutions that I think everyone’s opened up to. It’s like, ‘Does everybody really need to drive to the studio for one meeting today?’ Or ‘Can we have a Zoom meeting?’ And stuff that wouldn’t have been accepted before, it’s now becoming normal.”
Spaderna emphasized the importance of driving a cultural shift in the industry by having ongoing conversations that cross departmental lines. “For instance,” she said, “Camera’s going to want to have their hot batteries in the morning. But they don’t have to leave the generators on all night charging those batteries, they’ll charge in 90 minutes. And then they can turn the generators off that are powering those banks of battery chargers, they’re done. So if someone could set a timer, once Camera’s gone home, and whoever is monitoring this, they can turn that off and they don’t have to stay on that charge. There just has to be an ongoing conversation where people are all engaged and all excited about driving this cultural shift.”
A new white paper from InterDigital benchmarks the carbon footprint of global video entertainment as higher than that of the airline industry.
May 29, 2023
TikTok x Brands x Consumers: What Content Producers Should Know
Watch the full NAB Show 2023 session, “TikTok x Brands: How to Effectively Engage Consumers” above.
TL;DR
TikTok Head of Creative Lab Kinney Edwards and Krystle Watler, head of creative agency partnerships in North America, used the NAB Show stage to educate and inspire brands and creative agencies about how best to work with the platform.
TikTok is actually a multi-generational platform, and the company has seen the strongest growth among users ages 35 and above.
Authenticity of content is important but so too is producing it in accordance with TikTok’s native tools and story structure.
TikTok has become the biggest user-generated content creation platform in the world and at the 2023 NAB Show, TikTok’s global head of creative lab Kinney Edwards and Krystle Watler, head of creative agency partnerships in North America for TikTok, took to the stage for a session titled “TikTok x Brands: How to Effectively Engage Consumers.” Their talk was designed to educate producers, brands, agencies, marketers, and anyone interested in really diving into the platform to get the best creative out of it. Watch the full session in the video at the top of the page.
They sought to dispel some myths. For example, that the 18+ Gen Z audience is the predominant age group on the platform.
“TikTok is actually a multi-generational platform,” Edwards explained. “We’ve actually seen the strongest growth among 35+ users. Our most impactful rate of growth came during the pandemic and this is probably because we had a lot of cohabitation happening with Gen Zs returning home to their parents and people were looking for something to do to see the outside world.”
Contrary to some opinions, the TikTok algorithm does not favor content from well-known creators versus unknown creators. They cited a user who had just 70 followers until she posted a video of her skincare routine with one of her favorite products. The video went viral, racking up 43 million views, “not because she was popular, but because what she had to say in what she shared really resonated with people in an authentic way.”
The biggest myth, they claimed, however, is that TikTok is a social media platform.
“It is a next-generation entertainment platform,” they said. “This is all because TikTok operates on a content graph and not a social graph. On other social media platforms, it’s really about likes, and who you’re following in terms of delivery of content to you. The reason why TikTok has become the biggest social network in the world is because the algorithm is based on the interest graph, not on the social graph.”
Expanding on this, they said that if you follow people on Twitter or Facebook, you are served whatever those people post as they change over time, rather than content matched to what you personally are interested in.
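As a purely conceptual sketch of that distinction, and emphatically not a description of TikTok’s actual ranking system, the snippet below contrasts a feed ranked by who you follow with one ranked by how well content matches a viewer’s inferred interests.

```python
# Conceptual sketch only: contrasts a "social graph" feed (ranked by who you follow)
# with an "interest graph" feed (ranked by how well content matches your interests).
# Illustrative; not how any real platform's ranking system works.

from dataclasses import dataclass

@dataclass
class Video:
    creator: str
    topics: set
    engagement: float  # normalized watch-through / like rate, 0..1

def social_graph_feed(videos, following):
    # Show what the accounts you follow posted, most engaging first.
    return sorted((v for v in videos if v.creator in following),
                  key=lambda v: v.engagement, reverse=True)

def interest_graph_feed(videos, interests):
    # Score every video by how well its topics match the viewer's inferred interests,
    # regardless of whether the viewer follows the creator.
    def score(v):
        return sum(interests.get(t, 0.0) for t in v.topics) * v.engagement
    return sorted(videos, key=score, reverse=True)

videos = [
    Video("big_brand", {"fashion"}, 0.4),
    Video("unknown_creator", {"skincare", "howto"}, 0.9),
]
print(social_graph_feed(videos, following={"big_brand"}))
print(interest_graph_feed(videos, interests={"skincare": 0.8, "howto": 0.5}))
```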
After the myth busting comes some “universal truths” for brands wanting to make the best use of audience engagement on the platform.
Harking back to the viral hit, the chief message is to create content that is authentic to TikTok — meaning ensuring that the content doesn’t just have an authentic voice, but is produced using TikTok tools and formats.
“The visual language of TikTok normally varies by different community [audience],” Watler shared. “But in summary, the most basic tips that we can give you in terms of how to go native is shoot on a mobile device, that’s all you need. You can change the settings so that you can shoot 4K to give you an even clearer camera capture and use native editing tools. Make sure that you’re thinking through how the content is going to interplay with the actual user interface.”
Editing techniques seem important to TikTok creators in order to entertain audiences. Edwards advised, “When you’re thinking about that story you want to tell and you’re thinking about the structure, we also want you to think about how you are transitioning to the different beats.”
And don’t forget the audio. “Every brand has been working on leveraging their sonic IDs. TikTok is a place where you can play around with those sonic IDs and run with it,” Edwards said, citing Microsoft’s Windows chime turned into a song. “There are many ways that you can play with your sonic ID and your brand with sound on the platform,” he added.
“What’s really important with production on TikTok is to let the content be the star. Don’t overthink it. Don’t overdo it. TikTok is about authenticity, it’s about that glance into the imperfect, and the real. So let that drive your production.
“It doesn’t mean that TikTok can’t be beautifully crafted. We see a lot of luxury brands using high-end production technologies to create both on- and off-platform experiences. It’s up to you in terms of what you need to actually tell the story.”
Put all that together, and brands might see a 25% increase in attention (they say) with 64% of users on the platform saying that they would be interested in buying a product that they see on TikTok.
The 2023 NAB Show session, “The Independent Age in the Creator Economy,” offered expert insights into brand partnerships and more.
May 24, 2023
Digital Worldbuilding: The Next Data-Driven Experiences
TL;DR
A 2023 NAB Show session explored how digital and physical experiences can be combined to create engaging and immersive storytelling experiences using mixed reality technologies.
MetaCities teamed up with StarBase to create one of the first digital twin hybrid live venue installations.
The startup focusing on recreating real-world locations in the metaverse is producing more than 50 virtual events a year.
Watch the full NAB Show 2023 session, “The Future of Data-Driven Hybrid Experiences: Bridging Digital and Physical Realms.”
Virtual and augmented reality will eventually merge the metaverse with our everyday surroundings, but we can get a glimpse of how that might look in emerging location-based experiences.
A number of them are popping up, notably in Las Vegas, where the mother of all venues, the MSG Sphere, is gearing up for an autumn launch.
It’s fair to say that mixing the 3D internet populated by avatars with tactile, IRL-populated experiences is both experiential and experimental.
We are in the foothills of what is possible and getting the mix right involves considerable trial and error.
“Probably one of the largest drivers of the technologies that we’re working with was COVID,” MetaCities co-founder Chris Crescitelli explained at the 2023 NAB Show. “The pandemic definitely accelerated the timeline.”
The NAB Show session, “The Future of Data-Driven Hybrid Experiences: Bridging Digital and Physical Realms,” explored how digital and physical experiences can be combined to create engaging and immersive storytelling experiences using mixed reality technologies. (You can watch the full session in the video at the top of the page.)
MetaCities is a startup focusing on recreating real-world locations in the metaverse, and then providing experiences in those virtual spaces that include musical performances, avatars and holograms.
It’s producing more than 50 virtual events a year, selling tickets and advertising/sponsorship alike on behalf of its clients, which includes Las Vegas-based StarBase.
StarBase is an 8,000-square-foot live and virtual entertainment event space that is among the first to use holoportation technology to operate as a hybrid real world and metaverse event venue. StarBase has a “digital twin” built by MetaCities using Microsoft’s AltspaceVR to replicate the physical venue as an interactive one online.
“MetaCities teamed up with StarBase to create what we think is the country’s first digital twin hybrid live venue installation,” Crescitelli said.
He described the core building blocks of the shows as using “proto-holograms in the real world.
“If you haven’t seen or heard of a proto-hologram, it’s a seven-by-four-foot display in which people stand and their video is transmitted and displayed as a proto-hologram anywhere else. Traditionally people beam from one proto-hologram into the other. We’re using it for the live avatar projection as well,” he said.
“The second element is a robust metaverse platform, and the third element is the live tech in the building from camera installations and projection to sensors in the right places for the live audience to see each other. Technicians glue all that together.”
The success of these entertainments relies not just on technology but on utilizing the personal data of guests. Clearly there are issues of privacy, but if those can be overcome then it is possible to curate experiences which are shared and personalized at the same time.
Melissa Desrameaux, venue director at StarBase, explained, “The way our space is laid out there are a lot of different rooms that guests can freely flow into. We encourage them to create micro experiences throughout the event. Not everyone has to have the same experience at the same time. A lot of times they’re asking for just really experiential ways to bring food and beverage to life and we’ll have fun creating different stations, building props for them.”
The ability to track guest eye movement would benefit the experience, but most people are not yet comfortable giving permission for this.
“Everybody in those upper executive offices would love to have all that eye tracking information. And it’s so logical to do that. But it definitely borders on privacy issues that some people don’t want to cross. But the tech is all there for sure. In a lot of virtual worlds there are headsets that are enabled with eye tracking, and those will be more prevalent in the newer models.”
It is early days, of course, but there is potential for content owners to license their IP to appear in these virtual worlds and as avatars in the real world. Disney might seem the logical first mover, though the panel thought owners with more flexible, arguably less well-known IP — such as Netflix’s Stranger Things — might be more suitable for exploitation.
“In the same way that you have SoundExchange for music (a collective rights management organization that collects and distributes digital performance royalties for sound recording) eventually we’ll have a similar exchange for images. So you’re using Mickey’s pants in this and Disney’s gonna take a piece of your sales…”
HOLOPLOT CEO Roman Sick and 7thSense Design Director of Marketing Eric Cantrell joined Deluxe SVP of Innovation Richard Welsh to explore and explain the tech that enables the MSG Sphere and venues like it.
May 22, 2023
Headed to the Cloud: Virtual Live Remote Production and Video Distribution
Watch the full NAB Show 2023 session “Virtual Live Remote Production and Video Distribution” above.
TL;DR
As part of its Intelligent Content Experiential Zone, the 2023 NAB Show assembled a panel of NFL, NHL, and ESPN technology leaders to discuss the future of live remote production.
The session was moderated by AWS head of sports, global professional services, Julie Souza; panelists included NFL Media senior director of asset management & post production Brad Boim, NHL SVP of technology Grant Nodine, and ESPN director of specialist operations Chris Strong.
The panelists shared their experiences and goals, providing a glimpse into the future of broadcasting that is not only more efficient and flexible, but also ripe with opportunities for innovative content creation.
In an era where cloud technology is revolutionizing the Media & Entertainment industry, the 2023 NAB Show brought together executives from the NFL, the NHL and ESPN to discuss the future of live remote production. The panelists shared their experiences and goals, providing a glimpse into the future of broadcasting that is not only more efficient and flexible, but also ripe with opportunities for innovative content creation.
The Q&A session, entitled “Virtual Live Remote Production and Video Distribution,” was moderated by Julie Souza, Head of Sports, Global Professional Services, AWS, and featured NFL Media senior director of asset management & post production Brad Boim, NHL SVP of technology Grant Nodine, and ESPN director of specialist operations Chris Strong. Watch the full session in the video at the top of the page.
The session focused on the transition from terrestrial production to virtualized live production, which is touted as being more efficient, flexible, and scalable. The panelists discussed their experiences with proofs of concept (POCs) in cloud production and the groundwork that needs to be laid before moving production to the cloud. They also discussed the challenges they faced, including the need for resilience, confidence in the technology, and the cost of maintaining equipment in a data center.
To prepare to move to live remote production, the most important thing, Nodine said, “is just building your orchestration and ability to get video into the cloud on a repeatable basis, and really be able to drive automation in setting that up so that you can really start to think about new ways to spin up occasional compute, to be able to really hone in on what you want to do with that video, and make it more multipurpose and make it so that you can really adapt it to any audience, and on any device at any time.”
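As one hypothetical illustration of “spinning up occasional compute,” the sketch below uses AWS’s Python SDK to launch and later terminate event-tagged instances. The AMI ID, instance type, region and tags are placeholders, and this is a sketch under those assumptions, not a description of any workflow the panelists actually run.

```python
# Minimal, hypothetical sketch of event-scoped cloud compute with boto3.
# Assumes a pre-baked machine image containing the production toolchain;
# the AMI ID, instance type, region and tag values are placeholders.

import boto3

def spin_up_event_compute(event_name, instance_count=2):
    ec2 = boto3.client("ec2", region_name="us-west-2")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: image with encoders/tools installed
        InstanceType="c5.4xlarge",          # placeholder sizing
        MinCount=instance_count,
        MaxCount=instance_count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "event", "Value": event_name}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]

def tear_down_event_compute(instance_ids):
    ec2 = boto3.client("ec2", region_name="us-west-2")
    ec2.terminate_instances(InstanceIds=instance_ids)

# ids = spin_up_event_compute("sunday-night-game")
# ... run the production ...
# tear_down_event_compute(ids)
```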
POCs provide opportunities to demonstrate the efficiencies of remote production to leadership, the panelists agreed. “If you just keep working at it and establishing whether they’re POCs or actually being able to execute things live, you start to break the glass ceiling a little bit more because a lot of the folks up above in the senior management leadership roles are not maybe looking at it from the same lens that we are as technologists,” Boim commented.
“We can’t go off the air, we have to keep going all the time,” said Strong. “We actually did a previous POC in October where we shadowed a regular production that was already live so we could make sure that we had everything set up, that we were ready to go,” he recalled. “Then, at the second go-round, we knew we could do it, so we flipped that around and backed up our cloud instance with live.”
The panelists also discussed the potential of cloud technology to provide more flexibility and scalability, allowing for more unique experiences for viewers. They talked about the potential for creating different versions of broadcasts for different markets and the ability to adapt and try new things more easily with cloud technology.
Scalability, elasticity, and efficiency are the primary benefits of producing in the cloud, Strong said.
“Software vendors are becoming more cloud-first in their approach to developing new tools,” Boim added, noting that software updates are usually made available on a cloud-first basis. “Sometimes it’s easier to get access to [the latest tools] if you’re just spinning something up and you want to be able to try to use something without making a huge capital investment in your facility. So I think that is going to be a real benefit because you’re gonna be able to touch new things quicker because they’re much more accessible.”
However, they also highlighted the challenges that need to be overcome for wider adoption of virtualized live remote production, including cultural and training issues, the need for further development in the ecosystem, and concerns about resilience and reliability. They emphasized the need for a gradual, methodical approach to adopting this technology, with the industry working together with manufacturers and networks to ensure it is done properly.
“One of the biggest challenges is really convincing the organization that this is the direction that we should be taking, and that this is worth the effort that we’re going to make to build the orchestration and automation tools that allow us to be able to have all of these resources available in the cloud,” Nodine commented, noting that the industry is already very close to the point of “one feed to rule them all,” where orgs send one feed to the cloud that will satisfy all outputs. “We’re starting to see that unification happen,” he said, adding, “It’s really the cloud turning into a fog, right? Which is that basically the content gets closer and closer and closer to the end users to the point where it’s ubiquitous and available everywhere all the time. And we’re getting close to that.”
In the past, post-production and broadcast organizations would commission bespoke workflows that integrated applications and services for media professionals on the company network. These integrations, while tailored to the organization’s […]
May 20, 2023
How Everyone (Not Just Tom Hanks) Can Maintain Control of Their Digital IDs
TL;DR
Hollywood stars are having their digital likenesses copyrighted and managed by talent agencies, but we should all be able to take control of our own digital IDs, argues Metaphysic CEO Tom Graham.
AI is going to change content production profoundly, but may not necessarily force an exodus of jobs.
Generative AI will soon be able to capture our life experiences with the fidelity of reality itself.
Generative AI is going to profoundly change the way in which we create content, “because ultimately it is a hundred times cheaper than using 3D modeling and traditional VFX and setting up a camera,” said Tom Graham, the CEO of Metaphysic, in a keynote delivery at TED 2023 in Vancouver and in conversation with Jesus Diaz at Fast Company.
Evidence of this is happening today. Metaphysic, for instance, has developed an AI technology to capture the biometric AI profiles of any human to deepfake them in real time. The company signed an agreement with talent agency CAA to create AI-powered biometric models of its clients.
“We did a lot of de-aging of the characters because the movie covers their entire lifetimes. It’s both happening live on set while they’re actually acting, and then obviously it comes out in the movie and looks amazing,” Graham explains.
“Our partnership with CAA enables actors to own and control their data from the real world — their hyperreal identities [the biometric AI model made of photographic information captured in extremely high definition].”
Graham said it would mean that while you would still have to contract with the real Tom Hanks, “maybe Tom Hanks wouldn’t have to turn up on set to actually film.”
He said, “That’s definitely happening today, particularly in advertisements that involve sports figures, who have way less time to be in content than, say, actors. There are lots of applications in which we are beginning to decouple human performance from the physical locality and the time.”
All of this means that there’s a need to “empower” individuals to own and control their real-world data.
“We have agency over our bodies in the real world and our private spaces. People can’t come into our homes,” Graham said. “We need to extend that set of rights into a future that’s powered by generative AI. We need to democratize control over reality. Because if the means of production are controlled by big tech companies, then it’s the opposite of democratic norms and institutions that we experience today [in the physical world].”
For the record, Graham claims here that his own company has no interest in owning our data. “But we are the people who are definitely going to be pushing this discussion forward, trying to create tools and institutions to empower individuals.”
He also thinks that it’s only a short matter of time before generative AI is able to spit out hyper-realistic video content.
“I would say two years from now it will be super accessible and at the level of full video where you really struggle to tell the difference with reality. It’s a very short period of time for us to prepare ourselves both psychologically as individuals and as governments and regulators.”
However, while industry jobs will change Graham does not think there will be an imminent bloodbath.
“I honestly think that there’ll be more people hired to create the content of telling stories than there are today. What will be interesting, however, is how that works with unions and collective action.”
He added, “I think that the biggest category of job growth for the future of generative AI will be people who capture data from the real world and make that accessible to large AI models. If you think about what’s inside those models today, it’s not very good.
“We need to bring a thousandfold more data into those models to really be able to do stuff with the finesse that filmmakers want to do today. People who contribute to stock photography today will just migrate to contributing to these models in exactly the same business model.”
He ends the conversation with a prophecy that was previously imagined in “The Entire History of You” episode of Black Mirror.
“You can capture data from your experiences in the real world,” Graham said. “Maybe it’s your kid’s fifth birthday party. In the future, you can have that major event in your life in your catalog of life events, download it, render it out with AI, and fully relive that experience with exactly the same fidelity of the experience you lived the first time you were there. That’s a lot of what we’re talking about.”
The use of deepfake AI as entertaining “content” prompts questions about the ethics of a technology that is increasingly harder to detect.
May 22, 2023
May 17, 2023
From Cloud-First to Virtual Production: Amazon Studios’ Next-Generation Approach
Watch the full NAB Show 2023 session “Amazon Studios: Building a Next Generation Studio” above.
TL;DR
As part of its Intelligent Content Experiential Zone, the 2023 NAB Show assembled a panel of Amazon Studios execs to share their insights into constructing the next-generation studio.
In a session moderated by Jessica Fernandez, head of tech & security communications at Amazon Studios, panelists included head of technology workflow strategy Christina Aguilera, worldwide head of visual effects Chris Del Conte, and head of product strategy Eric Iverson.
From the adoption of a cloud-first approach to the extensive use of in-camera VFX, the panelists highlighted how Amazon Studios is redefining the entertainment studio model.
Espionage thriller “All the Old Knives” and Amazon Prime series “Solos” and “The Lord of the Rings: The Rings of Power” served as key examples of Amazon’s next-generation studio approach.
Imagine having sunset for nine hours a day on a film set, or creating a bustling crowd scene in the midst of a pandemic. These aren’t scenes from a sci-fi movie, but real-life examples of how Amazon Studios is revolutionizing the production ecosystem.
As part of its Intelligent Content Experiential Zone, the 2023 NAB Show assembled a panel of key technology leaders from Amazon Studios to share their insights into the groundbreaking strategies and technologies they’re leveraging to build the next-generation production studio. The panel discussion, “Amazon Studios: Building a Next Generation Studio,” was moderated by Jessica Fernandez, head of technology & security communications at Amazon Studios, and featured head of technology workflow strategy Christina Aguilera, worldwide head of visual effects Chris Del Conte, and head of product strategy Eric Iverson.
From the adoption of a cloud-first approach to the extensive use of in-camera VFX, the panelists highlighted how Amazon Studios is redefining the entertainment studio model and leading the way in the new era of film and TV production. The discussion centered around Amazon Studios’ pioneering use of a fully AWS-powered cloud infrastructure, its innovative virtual production facility, Stage 15, and the company’s commitment to sustainability and diversity, equity and inclusion. Watch the full NAB Show session in the video at the top.
Espionage thriller All the Old Knives, starring Chris Pine and Thandiwe Newton, served as a prime example of Amazon’s next-generation studio approach. The lead characters in the film frequently meet up in an oceanfront restaurant at different times of day. The production team initially considered shooting on location, but since they were filming in London, capturing authentic sunset views was challenging.
To solve this problem, they turned to virtual production. They shot plates of a sunset and projected these onto an LED wall that was placed outside the window of the set on a stage. This allowed them to control the lighting and weather conditions, effectively giving them “sunset for nine hours a day,” said Del Conte. This approach also offered significant efficiencies and sustainability benefits, as they didn’t have to fly the crew out to a beach location or chase the sun to capture the perfect shot.
The use of virtual production was so successful that Variety, in its review of the film, complimented the beautiful sunset scenes, not realizing that they were digitally created, Del Conte recounted. “Variety got fooled,” he said. “So, at the end of the day, [virtual production is] the right kind of tool to be using for these kind of conditions. You don’t have magic hour, you have magic day.”
Aguilera highlighted the production of the Amazon Prime series The Lord of the Rings: The Rings of Power as her favorite example of the studio’s next-gen approach. She emphasized the studio’s proactive adoption of a cloud-first strategy, which ensured a seamless data flow from camera to final creative output. This strategy proved invaluable when the COVID-19 pandemic struck, allowing the production to continue unabated while much of the world came to a standstill. The experience underscored the critical importance of a cloud-first approach and system interoperability in today’s production ecosystem.
“The fact that we had the cloud-first approach, the interoperability between the systems, the data flow straight from the camera, all the way through final creative, and all of these concepts, you know, they took work,” she said. “But when COVID hit, [the production team] didn’t skip a beat. They didn’t have to stop. The rest of the world stopped. They kept going. So that was pretty amazing, the fact that we were able to be proactive and be in a position to keep moving forward.”
Amazon Prime anthology series Solos was another example of the studio’s innovative approach, Del Conte said, describing an episode featuring Helen Mirren. “The entire scene was her inside the space pod the entire time, all white interior bubble windows. She had a red reflective leather space suit on and she has whitish hair.”
The initial plan was to use green screen outside of the windows, but this approach would have resulted in green screen spill, changing the color of the pod interior and Mirren’s hair. The bubble windows themselves also presented challenges and, recognizing these issues, Del Conte proposed virtual production as the solution.
He recalled a gratifying moment towards the end of the shoot when a member of the post-production team thanked him for his suggestion, saying, “Not only did we save time and money, but we’re also able to start testing this episode in two weeks,” as opposed to going through iterative processes of VFX shots and management.
Del Conte emphasized that this approach was not only more efficient and cost-effective but also resulted in in-camera final effects shots ready for testing. “This was really the only way to do this kind of shoot, and [resulted in] a better creative experience.”
German camera manufacturer ARRI launches ARRI Solutions Group, offering holistic solutions for the global film and television industry.
May 15, 2023
Navigating the Creator Economy: A Deep Dive with Industry Experts at the 2023 NAB Show
Watch the full NAB Show 2023 session “The Independent Age in the Creator Economy.”
TL;DR
As part of the CAPITALIZE Inspiration Theater programming, the 2023 NAB Show rounded up a panel of experts to share their insights into the creator economy.
Moderated by Nick Urbom, founder and CEO of the social platform Nowbase, panelists included YouTube content creators Adrienne Finch and Lauren Lipman and Marinda Yelverton, senior vice president of brand solutions, North America, at global creator commerce company Whalar.
The panel highlighted the power of audience engagement, the significance of brand partnerships, and the evolution of tools empowering creators to carve out their own economic independence.
As the creator economy surges forward, transforming the Media & Entertainment industry, the 2023 NAB Show served as a platform for an exploration of this rapidly evolving landscape. The panel discussion, “The Independent Age in the Creator Economy,” held on Sunday, April 16, 2023 in the CAPITALIZE Inspiration Theater, explored the dynamics of this burgeoning economy, now representing more than 50 million creators and a marketplace exceeding $104 billion. Watch the full NAB Show session in the video at the top.
The discussion, led by Nick Urbom, founder and CEO of the social platform Nowbase, offered a deep dive into the challenges and opportunities within this new economic paradigm. As creators navigate shifting revenue models, new government regulations like The EU Digital Markets Act, and the legacy of ad-driven social media models, the panelists — YouTube content creators Adrienne Finch and Lauren Lipman and Marinda Yelverton, senior vice president of brand solutions, North America, at global creator commerce company Whalar — highlighted the power of audience engagement, the significance of brand partnerships, and the evolution of tools empowering creators to carve out their own economic independence.
Discussing the importance of storytelling and human connection in marketing and advertising, each of the panelists emphasized how consumers connect more with stories and people than they relate to traditional advertisements.
“There’s just a lot of people out there wanting to connect and wanting to just be involved and put all in when they believe in a creator,” Lipman said.
“Consumers truly have the power here and they look to creators for relatability, aspiration. They don’t want to be told to buy a product. They want to really buy into a product because that brand really fits with their core values, their purpose, all those things,” said Yelverton.
Yelverton endorsed the subscription business model as “not only the most lucrative, but also the most connecting.” “It’s a true two-way conversation. Because guess what? If [a creator] doesn’t have time to even talk to everyone every single day, they have each other and they are like-minded individuals who are looking for the same thing.”
The importance of authenticity in content creation was another big topic. Authenticity in content creation is about being genuine and consistent, allowing audiences to connect on a personal level. It’s more than honesty; it’s about staying true to one’s values and style, fostering trust and loyalty among audiences. In an increasingly saturated digital landscape, the panelists agreed, authenticity stands out, serving as a key tool for engagement and influence.
“Brand collaborations are the definition of a partnership,” said Finch. “I will say not every creator is on the same page when it comes to that. A lot of people have different intentions or want to speak about different brands for different reasons. For us, I know it’s all about authenticity and genuinely working together to create a win-win situation.”
Lipman cautioned against endorsing products that don’t actually serve your community. “If we see a brand and we’re working with them and the contract is going but the product doesn’t work — I’m not going to promote makeup that will give someone a rash,” she explained. “If a couple of people were to purchase a product that I promote and it sucks, breaks, is bad, that is my authenticity, my credibility. Why would anyone else ever want to buy anything that I promote ever again?”
Maintaining authenticity while working under the pressure to create more content is one of the biggest challenges creators face, Yelverton said. “It isn’t necessarily a bad thing, the pressure to produce content and creativity, [but] I think it’s a challenge.”
Platform evolution is another challenge to maintaining authenticity, Yelverton added. “So, how do creators maintain that authenticity while they try and explore as well? And being able to test and learn yourself and then see what resonates. I think that’s going to be a consistent challenge as these platforms evolve and change their mission. One minute it’s social commerce, the next it’s short form video, you know? And being able to stay true to yourself and your audience follows you along the way.”
Another evolving factor is the shift in perception of creators over time, with brands now recognizing their influence and potential for marketing and brand representation.
“The brands have obviously jumped into this space and figuring out different ways to align with creators,” Yelverton said. “For those of us who pay attention to this space in terms of what’s going on, programmatic advertising and things that we might see on Meta, the Facebook platforms, Instagram, etc., where you can really target audiences with advertising. What we’re seeing is a huge uptrend in brand partnerships.”
Influencer marketing is a “win-win-win-win situation,” says Finch. “And here’s why. Because as a company, you’re actually paying far less than you would for a traditional ad. The whole entire production of a commercial talent ad space marketing for everything, you’re paying way less. This is a one-woman show over here…. So they’re paying way less. And guess what? For me, that way less is a lot because it’s just me. So I’m getting paid more. They’re spending less. Not only that, but they have absolute, real-time metrics of demographics, feedback, comments, likes, dislikes, exactly how it’s doing. So I think companies started really latching on to, okay, this is very targeted and we can target and we can find creators that we trust and have credibility.”
Watch the full NAB Show 2023 session “Lights, Camera, Action: Building the DIY Production Studio.”
TL;DR
As part of its CREATE Experiential Zone, the 2023 NAB Show rounded up a panel of experts to share their insights into constructing the ultimate studio setup.
Moderated by “Inside the Creator Economy” editor and publisher Jim Louderback, panelists included John Canning, AMD’s director of developer relations for creators; Nelco Media president Philip Nelson; Renee Teeley, creator economy advisor and host of “The Creator Feed” podcast; and Evans Media Group founder Ivan Zeljkovic.
Workshop participants learned about the key differences between a creator studio and a traditional TV studio, particularly in terms of the balance between capability and cost, along with the need to scale as a creator’s channel grows.
The panel also addressed solutions for lighting, audio and set design, as well as camera formats, and the choice to use live or edited video.
If you’re a content creator, you know how important it is to produce high-quality content that captures your audience’s attention. However, as a creator, you also know that this can be a costly endeavor. As part of its CREATE Experiential Zone, the 2023 NAB Show rounded up a panel of experts to share their insights into constructing the ultimate studio setup.
Moderated by Jim Louderback, editor and publisher of the Inside the Creator Economy newsletter, this hands-on workshop, “Lights, Camera, Action: Building the DIY Production Studio,” focused on the art of creating and producing content using a range of cost-effective technology solutions. The session included John Canning, Director of Developer Relations for Creators at AMD; Philip Nelson, President of Nelco Media; Renee Teeley, creator economy advisor and host of The Creator Feed podcast; and Ivan Zeljkovic, founder of Evans Media Group. Watch the full NAB Show session in the video at the top.
During the workshop, participants learned about the key differences between a creator studio and a traditional TV studio, particularly in terms of the balance between capability and cost. “The difference is blurring quickly,” said Nelson, noting that with the increasing democratization of professional-grade production tools it really comes down to scale.
The biggest difference, said Teeley, “is you are in front of and behind the camera at the same time,” but she cautions that there are actually many different types of creators.
“When I was talking about going to the independent creator and building out a studio, that’s very different than if you think about like the Mr. Beasts of the world that have huge teams around them,” she continued. “And I think their setup is probably a little bit more similar to what you would get with traditional TV studios in some respects, because they have a larger team.”
The panel also addressed the question of whether — or when — to use live footage versus pre-recorded and edited material. “The great thing about live from my perspective is that you do live and you’re finished. You know, we do a lot of studios for government public affairs, and usually they’re understaffed. It’s usually one on camera person and maybe the PIO, the public affairs officer and a video person,” Nelson said.
“And so one of the things that we really push is that live workflow,” he explained. “You know, I’m going to just record it. It’s an hour discussion. At the end of that hour, I’m finished and I’m moving on to the 50 other things I need to do instead of taping a session and now going into post and spending a day or two editing it.”
Live workflows utilizing this one-and-done approach can still be fraught, Teeley warned. “Just ask Netflix,” she said. “They just went through a situation where they were trying to do a live version of Love Is Blind and it never aired because they had technical issues. Live can be very challenging; you have to fix things in the moment.”
Video formats are another big consideration for content creators. Social media platforms, said Zeljkovic, “pushed us to the vertical, so they pushed us to do real studio posting.”
“If you are creating for various different types of aspect ratios, try to shoot at least 4K so that when you turn it down you don’t have to crop anything that you wouldn’t want cropped,” Teeley advised. “So if you’re shooting horizontally, try to do it in 4K and bring that down to a vertical video, but also pay attention to what’s behind you.”
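To put rough numbers on that advice, here is a minimal back-of-the-envelope sketch (ours, not the panel’s) of why a 4K horizontal source survives a vertical crop while a 1080p source does not; the function name and frame sizes are illustrative assumptions, not anything shown in the session.

```python
# Rough illustration of Teeley's "shoot at least 4K" point: a 9:16 vertical crop
# from a UHD frame still exceeds full vertical-HD resolution, while a 1080p
# source falls well short.

def vertical_crop_width(frame_w: int, frame_h: int, aspect_w: int = 9, aspect_h: int = 16) -> int:
    """Widest vertical crop (default 9:16) that uses the frame's full height."""
    return min(frame_w, frame_h * aspect_w // aspect_h)

for label, (w, h) in {"UHD 4K": (3840, 2160), "Full HD": (1920, 1080)}.items():
    crop_w = vertical_crop_width(w, h)
    print(f"{label} ({w}x{h}): 9:16 crop is {crop_w}x{h}")
# UHD 4K (3840x2160): 9:16 crop is 1215x2160  (above a 1080x1920 vertical delivery)
# Full HD (1920x1080): 9:16 crop is 607x1080  (well below vertical HD)
```

In other words, the UHD frame leaves enough pixels for a clean vertical deliverable, which is the headroom Teeley is describing.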
Audio, the panelists agreed, is one of the most crucial aspects of content creation, and can make or break a production. “Having good audio is the key,” said Canning. “You can have bad video and good audio and still have a good production. If you have bad audio and good video, you’re pooched.”
In addition to hands-on guidance on lighting solutions for beginner creators, the workshop also provided tips for creating a set and selecting appropriate backgrounds. “Don’t be stuck to the wall,” Zeljkovic offered. “Try to go as far as you can out of the wall. Please. Just don’t have shadows on the wall. That’s it. I mean, that’s the one thing that you should not have. And if you can achieve depth of field, that would be perfect.”
The workshop also emphasized the importance of selecting products and infrastructure that can scale as a creator’s channel grows. “As you transition, it’s your workflow” that needs to scale, said Nelson. “You know, people get into the habit of just doing it all themselves. And in order to scale, you got to design a good workflow for your content. You’re producing the show, your guests, if you have guests, and in letting your team do their jobs, because as a creator, you’re like, ‘I can do this.’”
Meet three young creators. This conversation, moderated by Jim Louderback, will preview NAB Show’s Creator Squared workshops and panels.
May 8, 2023
Building the Next-Gen Studio: Why Sustainability in Media Matters
Watch the full NAB Show 2023 session “Why Sustainability In Media Matters.”
TL;DR
The 2023 NAB Show put a spotlight on the M&E industry’s efforts to rise to the challenge of climate change with the inaugural Excellence in Sustainability Awards and Main Stage panel discussion, “Why Sustainability In Media Matters.”
Moderated by Kibo121 CEO Barbara H. Lange, panelists included Creative Visions president and CEO Pat Chandler, Accedo VP of Strategy and Business Development Bleuenn Le Goffic, Susan Sanchez from the Senior Sustainability Program at Amazon Studios, and MediaMonks SVP of Innovation Lewis Smithingham.
The pandemic helped drive many of the sustainable practices that have become common in today’s production ecosystem.
The role of COVID coordinator could evolve to that of IT sustainability coordinator in the next-gen studio of the near future.
As the most recent UN report warns that the planet is in real danger of missing critical targets to reverse climate change, sustainability has become one of the biggest touchpoints across media and entertainment. This is evidenced by initiatives such as the Greening of Streaming industry effort, BAFTA’s albert, and the PGA’s Green Production Guide to certify green productions, but these programs are merely the tip of the literal iceberg already melting into the sea.
The 2023 NAB Show put a spotlight on the M&E industry’s efforts to rise to the challenge of climate change with the inaugural Excellence in Sustainability Awards recognizing individuals, companies and products/services for media technology innovations that promote conservation and reusability of natural resources and foster economic and social development. The Main Stage awards ceremony was preceded by a panel discussion, “Why Sustainability In Media Matters,” to examine the state of sustainability in media today, and what it means for the future. Watch the full NAB Show session in the video at the top.
Moderated by Barbara H. Lange, Principal and CEO of sustainability consultancy Kibo121, the panel included Pat Chandler, President and CEO of Creative Visions, the nonprofit organization selected as a recipient of the proceeds from this year’s awards nomination process; Bleuenn Le Goffic, VP of Strategy and Business Development at Accedo; Susan Sanchez from the Senior Sustainability Program at Amazon Studios; and Lewis Smithingham, SVP of Innovation at MediaMonks.
Smithingham recounted how production practices have slimmed down, reducing travel and reliance on big broadcast trucks running on generators for days “down to something that takes less power than a refrigerator,” he said. “We’ve heard that just reducing the flights reduces our carbon footprint by, like, 83%.”
Single-use plastics and set builds are two areas Amazon Studios is focusing on making more sustainable, Sanchez said. “You have a crew of 200 people on set for eight weeks. That’s a lot of single-use plastics. So, we are facing that coming out of COVID, and also set builds: construct to deconstruct and reconstruct. So reuse should be first and recycling should be last, right? And disposal hopefully not [at all].”
Le Goffic, pointing to use cases for merging resources on live productions such as the recent BBC/ITV collaboration during the World Cup, asked if this could become a trend. “We’ve seen a little bit of that particularly around virtual stages where instead of having, like, eight single stages for a single event, you have a single LED stage and the set just changes over and over and over again,” Sanchez noted.
Reuse of shared visual assets and engineering resources, said Le Goffic, are both key to developing sustainable practices. “Having common resources that we can use for exactly what you need instead of having these constant battles or, you know, developing your own IP, having something that you built yourself, I think there is a transition that we’re seeing.”
From a creator’s perspective, said Chandler, it’s vital to examine the impact a production will have. “Part of what we do is from concept to, you know, the story and distribution, but we also help them with their impact campaigns and looking at how they’re going to go about building the teams and telling those stories.”
The pandemic in particular helped shape a number of sustainable practices, Sanchez noted. “There was a lot that went on during the pandemic that was a positive for sustainability in production,” she said. “We work very collaboratively with the other studios and the organization called the Sustainable Production Alliance, borne out of the PGA and working with the Green Production Guide and really bringing that up to speed to be a more global guide.”
Collaboration on tools and even education also increased during the pandemic, Sanchez added. “There was a lot of education that started happening on set that was really blurring the lines. So COVID coordinator guys were naturally seeing things on set that they said, ‘You know what, I’m going to take care of this.’ And so there was a lot of collaborative work that hadn’t happened before.”
Lange asked if the role of IT sustainability coordinator might eventually replace the COVID coordinator. “I think there’s a lot of cross-training,” Sanchez said. “There’s also a lot of upskilling. I don’t think there’s replacement of jobs. I think we’re looking at a whole new world, kind of a next-generation studio.”
A new white paper from Interdigital benchmarks the carbon footprint of global video entertainment as higher than the airline industry.
May 7, 2023
Will Video Games Give Hollywood an “Extra Life”?
TL;DR
The success of HBO’s “The Last of Us” has streamers rapidly expanding their game-related series, with game IP providing culturally relevant content that can be relatively easily adapted.
While it seems likely that video games could be the next frontier for box office dominating big-budget adaptations, the medium has plenty of quirks that will make the adaptation process difficult.
At least 60 game-to-screen adaptations are currently in various stages of development.
HBO’s The Last of Us has been hailed as the first good video game adaptation. After years of trying, has Hollywood finally got the formula right — or is it just that video game developers have become better at storytelling than the studios?
Perhaps neither, but one thing everyone seems to agree upon is that The Last of Us provides a benchmark for the rapidly growing number of game-related feature films and series coming our way.
Global video game adaptations soared by 47% from 2021 to 2022, according to analyst firm Omdia, and streamers are now increasing investments in bringing games to screens as high-end live action series.
“Streaming services and studios need more content to monetize their services and reach profitability,” Maria Rua Aguete, chief media analyst at Omdia, tells Richard Middleton at Television Business International. “Dedicated fan bases across IP such as games, books and podcasts are becoming increasingly valuable.”
In some quarters this business concept is being called “transmedia storytelling.” It’s not a new idea, as studios have consistently done their level best to wring as much IP out of a successful franchise as they can by spinning features into TV shows and video games and all sorts of other media merch.
In addition, game studios were already looking to Hollywood to spread their stories and have struck development deals with streamers. “Now they have an ideal to aspire to,” points out Will Bedingfield at Wired.
Broadly speaking, if a story is extended, it’s transmedia, so the third episode of The Last of Us, which explores the love between minor characters Bill and Frank, counts; other episodes constitute straightforward adaptation.
“Playing The Last of Us, few people thought of Bill as much more than a trap-setting maniac; watching The Last of Us, they saw him in a different light,” says Bedingfield. “The game’s universe grew deeper.”
To the game studios, it seems obvious why the TV sector is taking increased notice of the content and experiences they are creating.
“AAA video games are very close to TV series (not movies) in terms of creating complex narratives and building strong relationships between in-game characters and gamers/audience, which is a great foundation to build on,” Bartosz Sztybor, comic book and animation narrative director at Polish video game developer CD Projekt Red, tells TBI.
The size of the gaming market — two-thirds of US consumers are gamers across mobile, PC and console platforms — also provides huge cross-selling potential, meaning more adaptations can be expected.
Adaptations are also on the rise because gaming IP tends “to lean into the political and social zeitgeist,” the Omdia analyst adds, citing the fresh approach to LGBTQ+ representation and multigenerational lead characters in The Last of Us.
Yet the formula that made the HBO drama a hit could simply be the “genius” pairing of showrunner Craig Mazin with The Last of Us game creator Neil Druckmann. As HBO Max chief content officer Casey Bloys tells TBI, “there is nothing particular about video games that make them better or worse to develop,” and stresses that he was drawn to the project due to having had “a very good experience” with Mazin on his drama Chernobyl.
Helene Juguet, who runs the French division of Ubisoft Film & Television, says original creators should not necessarily serve as a showrunner or writer, “because those are two very different types of expertise.”
She does suggest, however, that it is “absolutely necessary for the show’s creative team to deep dive into the world of the game they are adapting” so they can understand what the creator was aiming to achieve and connect with “what makes the fans tick.”
“At the same time that video game adaptations have been waxing, superhero movies have been waning,” says Andrew King of The Gamer, who has crunched the numbers. Shazam! Fury of the Gods opened with a $65 million global weekend, $20 million under the studio’s lowest estimations, and was just the latest in a line of superhero movies that have underperformed.
Though Spider-Man: No Way Home was a massive hit, the MCU has now had two flops in close succession with 2021’s Eternals and last month’s Ant-Man and the Wasp: Quantumania, while none of its TV shows have lit up water coolers like The Last of Us.
With the growing sense of superhero fatigue, there is fair speculation that video game adaptations could take their place.
“In a world where studios seem primarily interested in proven properties with an excited fan base already willing to pay for admission, it’s not hard to imagine video game adaptations becoming the new superhero genre,” says Devin Baird, writing at MovieWeb.
Studios may be tempted to leap onto games as the next big multi-universe franchise, but the same approach that worked with the MCU will be harder with a video game as the source material.
“Comics have always married graphic art and written stories, but games have often treated stories as an afterthought, a connective tissue added after the fact to make the transitions between levels and set pieces run more smoothly,” says King. “Hollywood execs quickly run into the inconvenient reality that games don’t tend to provide much story justification for why characters from different series are suddenly in the same world together.”
Plus, each game hardware manufacturer has its own exclusive series but, unlike with DC and Marvel, just because Microsoft owns Master Chief and Marcus Fenix, that doesn’t mean they exist in the same universe.
“The upside of this is that audiences likely won’t have to keep up with a bunch of interconnected movies and TV shows just to understand the next big thing. Mario isn’t going to end on a tease for The Last of Us season 2.”
The reality could also be, as an article in Time magazine points out, that video games have become so cinematic in their scope, storytelling and visuals that they’ve superseded some films in terms of their ambition and emotional resonance.
“The Last of Us stayed close to its source material exactly because the original game was designed, essentially, as interactive cinema with all the twists and heartbreaks one might expect from a prestige project.”
So, while it seems likely that video games could be the next frontier for box office-dominating big-budget adaptations, the medium has plenty of quirks that will make the adaptation process difficult.
There are currently upwards of 60 game-based productions in development and analyst firms like Newzoo are already reporting that game IP is climbing in value “as transmedia becomes more relevant.”
Time lists a number of game-to-screen adaptations, including Gran Turismo starring Orlando Bloom, which was released in August; a Tomb Raider reboot, which Fleabag writer and star Phoebe Waller-Bridge is putting together for Amazon Prime; Bioshock, a post-apocalyptic thriller with Hunger Games director Francis Lawrence attached for Netflix; a series based on Metal Gear Solid with Oscar Isaac attached; and Borderlands, a feature with Cate Blanchett, Jack Black, and Jamie Lee Curtis adapted by TLOU’s Craig Mazin.
This might just be the start of a new wave of video game movies that rule movie theaters and streaming to become the new dominant media.
At NAB Show, “Last of Us” producer Craig Mazin discusses the “luck” involved in assembling the right team for the show’s production and post.
May 7, 2023
AI-Generated Commercials May Be Cursed Today, But Give the Tech a Minute
Tools used: GPT-4 for the script, Midjourney for the imagery, Runway Gen2 for the video
TL;DR
The first AI-generated ads and the first official AI-generated political campaign video herald dangers for which we are ill prepared, according to experts.
Once people grow accustomed to fake AI-generated videos, they’ll become even more hardened, cynical and harder to reach and convince.
The time when a deepfake video that can’t be easily discerned from real events plays a major role in campaigns seems not only inevitable but closer than ever.
Ads for insurance or a car are in many ways the same as political campaign spots, but when the truth of what they are selling can always be doubted, how can consumers or voters know what they are buying into?
Artificial intelligence is going to make fools of us all and it’s a danger for which we are not prepared, according to experts.
In the week that saw one of the foundational figures of modern AI, Geoffrey Hinton, leave Google to decry the pace of change without checks and balances, the first AI-generated commercials were released.
Two of them are experiments — proof of what’s possible, with occasionally hilarious results — but the third has more profound implications.
“‘Synthetic Summer’ is a machine-learning interpretation of an American beer advert,” said Chris Boyle, co-founder of London-based Private Island, which generated a video ad from text prompts.
“We’ve been using Stable Diffusion, Control Net and Runway to understand new forms of moving and generative image for the last 12 months — exploring new ways of working and new mediums of visuals powered by Machine Learning,” Boyle told Roland Ellison at Interesting Engineering.
Another video creator named PizzaLater also made a spot for a pizza chain using an array of AI tools. He explained to Jamie Madge at Shots that he did it just for fun.
“I asked GPT-4 to ‘write me a silly script for a pizza restaurant commercial using broken English.’ It generated three scripts in total, and I picked my favorite parts to assemble the final script used in the video.”
He also asked the AI for 10-15 “meme-worthy names for a pizza restaurant,” and “Pepperoni Hug Spot” was chosen. In MidJourney, PizzaLater generated images of the restaurant’s exterior and some pizza backgrounds and then used Runway Gen2 to create the spot simply by requesting “a happy man/woman/family eating a slice of pizza in a restaurant, tv commercial.”
Eleven Labs’ “Voicelab” delivered a few different voiceover reads of the script, allowing him to piece together the best takes. SOUNDRAW provided some appropriate background music.
It took all of three hours.
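For readers curious what that first step looks like in practice, here is a minimal sketch of the script-generation stage only, using the OpenAI Python client: the prompt is the one PizzaLater describes, and the three drafts mirror his account. The MidJourney, Runway Gen2, Eleven Labs and SOUNDRAW stages are outside this snippet, and this is our illustration, not his actual code.

```python
# Sketch of the script-generation step described above (illustrative only).
# Assumes the OpenAI Python SDK (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write me a silly script for a pizza restaurant commercial using broken English.",
    }],
    n=3,  # request three drafts, from which the best parts can be assembled by hand
)
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Draft {i} ---\n{choice.message.content}\n")
```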
“I believe things will really go off the rails when text-to-video becomes closer to photorealism,” he said.
Of course, it won’t be long before this is possible. Perhaps as soon as 18 months away — the date of the 2024 Presidential election.
It was inevitable that politicians would seize on the technology. Turns out that the Republican National Committee got in there first, in the US at least.
Its AI-generated ad, released last week, depicts a hypothetical future where President Biden is reelected, banks collapse, China invades Taiwan, and San Francisco is cordoned off by the military after being overrun by immigrants, gangs and drugs.
“It’s more notable for being the first of its kind than for the quality of the ad itself,” says Cameron Joseph at VICE. “President Biden and Vice President Kamala Harris clearly have that toothy, wolfish look common in AI-generated art. Most of the ad’s footage could have just as easily been replaced by b-roll video from actual events. And to their credit, the RNC explicitly released the ad as an AI-generated video, and labels it as such on the video itself.”
Campaign strategists in both parties told VICE that they doubt major candidates and the party committees will be willing to risk including fake, AI-generated videos in major TV campaign ads. The chance of being called out for lying and losing trust with voters simply isn’t worth it. They think that AI technology will be most often used for more mundane tasks like writing simple campaign ad scripts and press releases.
But the video, even if it is a gimmick, marks the beginning of a new era where it could become even harder for voters to discern truth from lies.
“The concern is when we get to the point where it can be done down at the grassroots level. There are tools that are out there where they could generate this stuff en masse in an automated way,” Dave Doermann, the director of University at Buffalo Artificial Intelligence Institute, tells Joseph.
“We’re not going to be able to detect it in real time fast enough that it makes any difference, [and] even if we could, the social media sites aren’t going to be the ones that are putting the effort into taking it down.”
Even if deepfakes don’t become ubiquitous before the 2024 election, still 18 months away, the mere fact that this kind of content can be created could affect the election. Knowing that fraudulent images, audio, and video can be created relatively easily could make people distrust the legitimate material they come across.
“In some respects, deepfakes and generative AI don’t even need to be involved in the election for them to still cause disruption, because now the well has been poisoned with this idea that anything could be fake,” independent AI expert Henry Ajder tells Thor Benson at Wired. “That provides a really useful excuse if something inconvenient comes out featuring you. You can dismiss it as fake.”
Democratic ad maker Jon Vogel agrees, telling VICE: “The biggest hurdle to effective political advertising is credibility. With AI technology becoming more common in our lives, voter skepticism will only continue to grow. This increases the burden on political media firms to find additional ways to make the ads credible.”
Also in the Wired article, Hany Farid, a professor at UC Berkeley’s School of Information, calls out the companies making generative AI tools like Runway, Google and Meta for playing with a runaway train.
Farid says that nobody wants to get “left behind,” so these companies tend to just release what they have as soon as they can.
“It consistently amazes me that in the physical world, when we release products there are really stringent guidelines,” Farid says. “You can’t release a product and hope it doesn’t kill your customer. But with software, we’re like, ‘This doesn’t really work, but let’s see what happens when we release it to billions of people.’”
The time when deepfake videos that can’t be easily discerned from real events play a major role in campaigns seems not only inevitable but closer than ever.
While much of the potential impact of AI on entertainment is still hypothetical, creative guilds are taking steps to protect their members.
May 6, 2023
Gary Vaynerchuk: AI Is Coming for You. Get Over It. Get Used to It.
TL;DR
Get curious and learn to adapt, or dwell in the past and get left behind. It’s your choice, but AI is coming either way, argues entrepreneur Gary Vaynerchuk.
Vaynerchuk offers a capitalist’s perspective on how algorithms might take over the labor force.
The current chairman of VaynerX, CEO of VaynerMedia and CEO of VeeFriends says “if you’re not using AI in your daily life, you’re making a huge mistake.”
What’s all the angst about AI? asks Gary Vaynerchuk. Just live with it.
A successful businessman currently serving as chairman of VaynerX, CEO of VaynerMedia and CEO of VeeFriends, Vaynerchuk has, let’s just say, a capitalist’s perspective on how algorithms might take over the labor force. You could view what he says as deeply patronizing or a much-needed wake-up call.
“If you’re not using AI in your daily life, you’re making a huge mistake!!!” he writes on his personal blog. “If you’re so anti-technology at this point, imagine a life where you can’t order food or use phones.”
He even harks back to the Luddites of the 19th century, who broke farm machinery and textile machines that threatened their livelihoods.
“The fear around modern technology is as old as time. Farmers saw it as a threat and thought it would burn the world but what it did was make the human life easy and created jobs around it. Fearing AI coming for our jobs is just like one of those things. Always remember, when jobs are taken away, new jobs are created.”
Instead, Vaynerchuk urges anyone who will listen to start learning to work and play with AI tools today. He says he believes that human ideation and creativity will still be needed in order to produce the best from AI.
“Creativity is about adding, subtracting, and making things make sense… and ChatGPT is no different with that. With AI, the commodity is being destructed but you still need to come up with ideas. What most of you are missing in the context of worrying about AI is that you still have to feed it with creativity.”
In a related post on TikTok, Vaynerchuk argues that the heated debate around AI is the same “hysteria” that happens any time there’s a major technology advancement.
“The reality is, sh*t changes. You either stay curious and taste things and learn to adapt, or you dwell on why you “wish” things would stay the same and get left behind. It’s your choice, but AI is coming either way.”
Executives at the 2023 South by Southwest conference urge users to consider AI tools as helpers for human activities such as brainstorming.
May 2, 2023
Dreams and (Extended) Realities: The Future of Immersive Media
Watch the full NAB Show session above.
A panel discussion at NAB Show featuring Jake Zim of SPE and Aaron Grosky of Dreamscape explores the current state of and future for immersive media.
Video games, realtime virtual production, VR, AR and extended reality (XR) are combining to revolutionize immersive storytelling experiences in ways that studio heads are actively imagineering.
Jake Zim, SVP, Virtual Reality, Sony Pictures Entertainment, said, “VR is immersive and isolating in many ways. It’s intense. Most people’s VR sessions are between 20 and 40 minutes. That’s very different than watching a long-form piece of content. So, every slice of the spectrum of storytelling aligns with the different technology.”
His concern is, “How do the different touchpoints all both connect with and stay authentic to the original brand and the vision holder of the original brand, but yet offer the audience something unique that the technology or platform could deliver?”
Zim was in conversation on the stage with Aaron Grosky, president and COO of Dreamscape Immersive and COO of Dreamscape Learn.
Grosky said, “Gameplay, at its core, is about narrative. I think about owning the structure and taking you on a journey and trying to direct you and make you see things and have things going on around you, but not in having agency and interactivity, because then you lose control of the story and where we’re trying to take you.”
The pair talked about world building, cross pollination of IP, the benefits of interactive VR in education, and the possibilities for creating new forms of storytelling with virtual production.
They also addressed how the next generation of talent will change the creative focus of the industry: Zim said, “When I speak to young people who are interested in getting into the creative business, whether it’s movies, television or interactive, I always try to point them towards where we’re seeing real growth in terms of product and consumer behavior.
“So designing for programing, for building, for artwork and for interactive platforms that can hopefully have a storytelling element to them is really the exciting space to be.”
After AI, the Next Gamechanger Will Be Mixed Reality Wearables
TL;DR
Paramount Pictures futurist Ted Schilowitz says next-gen mixed reality devices are getting closer, and in time we will all be using them in the same way we use smartphones today.
When we get there the future of entertainment will be more like the contiguous mixed reality experience of a Disney theme park, says Schilowitz.
Schilowitz refuses to call this mixed reality experience the “metaverse,” which he says has already jumped the shark.
Wearables that will finally enable us to experience the mixed reality three-dimensional internet are coming. Paramount Pictures futurist Ted Schilowitz has seen them.
“The ones that have a visual toolset in them, trust me, it’s coming,” he said during an interview at SXSW. “It’s the coolest part of my job. I get to see things way, way before they actually have consumer viability. So I can tell you that it’s coming. Get ready. Because somewhere in the future we’re all going to have something different than we have today to communicate with, to connect with, to socialize with, to have fun with.”
Describing his role as “a professional frog kisser — because you got to kiss a lot of frogs to find your prince or your princess,” Schilowitz explained that when he talks with tech start-ups and his own company’s R&D teams, he asks: Why are you building this? What does it solve?
“And a lot of people don’t have a strong enough answer to get me excited about why they’re doing it. A lot of things that relate to this — that maybe a lot of you are pursuing in some fashion — is what we call the metaverse.
“I refuse to call it the metaverse anymore because it’s already fully jumped the shark.”
Schilowitz explained how the work of a futurist is really about mining the past and extrapolating trends. For example, where once the first flat-screen HD TVs cost thousands of dollars, now 4K giant home displays are commonplace.
“Just like a smartphone is no longer technology but a part of your biology. It is so linked to what you do and how often you hold it, touch it, connect with it, use it. There’s people multitasking on their biological devices, and that is not technology anymore.
“It is actually embedded in your physical and mental DNA. But we’re still using a form factor from generations ago.
“So when you hear a guy like Apple’s Tim Cook say augmented reality and mixed reality will be like this — that at some point in the future you will kind of not even understand how you lived without it.”
Schilowitz continued, “The perception of technology has got to a point where we’re starting to learn and remember where our eyes are and where our ears are. And they’re not in our hands, right? We will get to a point where it makes more sense to wear it, than hold it.”
The vision that Schilowitz has of the future of entertainment was previously imagineered by Walt Disney. Growing up in and around Disney World in Orlando, Schilowitz says the theme park where experiences are lived will be extended digitally into the wider world.
“If it’s done correctly, you don’t know where [the theme park] starts, where it ends. It’s just this joyous, amazing experience and it’s everywhere around you. And now we’re able to do that with technology.”
The start of this is with home VR, which the exec said was starting to be a profitable and viable part of the industry. The next step is trying to put VR into something as ergonomically pleasing as a smartphone, “whether it’s glasses or implants or whatever we all end up using, but it’s going to be closer to our eyes,” he said.
In addition, the way we capture images will have to change from flat 2D to one that includes depth, volume and true 3D.
“We need a different form of capture. [Current] 3D is an illusion of that. It’s only on one plane of volumetric capture. There’s a lot of companies doing this and it’s very early stage but it’s an important next step. Cameras will capture volume as well as high fidelity so that we can use them in the devices of tomorrow.”
HOLOPLOT CEO Roman Sick and 7thSense Design Director of Marketing Eric Cantrell joined Deluxe SVP of Innovation Richard Welsh to explore and explain the tech that enables the MSG Sphere and venues like it.
May 1, 2023
IABM Reports “Mood Darkening” for Media Technology Vendors
Watch the IABM’s “State of the Industry Breakfast Briefing” from the 2023 NAB Show.
TL;DR
NAB hosted IABM executives to discuss the findings of its latest industry benchmark report at the 2023 NAB Show.
Business confidence in MediaTech is slightly down from its peak in 2022 though it is still quite positive, the IABM reports.
Scarcity of resources, and particularly of talent, remains a growth barrier for MediaTech businesses, and is influencing investment.
Economic headwinds have negatively affected business confidence and M&E business models, the IABM reports. Its latest research into the media tech industry says investment is being rationalized with the priority put on cost reductions. Watch the IABM’s State of the Industry Breakfast Briefing from the 2023 NAB Show in the video at the top of the page, and read the full IABM reports, “The State of MediaTech 2023” and “The State of the Industry Breakfast Briefing.”
“The mood is darkening in the industry compared to the last two years,” said Lorenzo Zanni, lead research analyst at IABM, in conversation with NAB Amplify. “It’s still in a better position compared to the COVID period, but macro headwinds are influencing the industry in a variety of ways.”
Watch “NAB Amplify with Lorenzo Zanni of IABM” at the 2023 NAB Show.
Macro headwinds such as inflation have already negatively influenced Media & Entertainment business models, including ad-funded, subscription-based and content models.
These challenges are evident in the number of companies making staff redundant. As of Q1 2023, 528 technology companies (about half the number that made layoffs across all of 2022) had already laid off roughly 95% as many people as were let go in the whole of 2022.
Although the overall media tech market for broadcast is valued by the IABM at $67 billion, major technology stocks declined by 20% between their December 2021 peak and Q1 2023.
As a result, the sector is focused on achieving greater efficiency. That translates into investment in media technology with organizations more likely to purchase AI/ML tech for automation ahead of technologies that might deliver a more immersive experience for viewers.
From an investment perspective, the IABM research finds a slowdown in cloud investment compared to 2020-2022. Zanni speculates, “This may have been driven by a re-evaluation of cloud spending by media businesses that heavily invested in it out of necessity in that period, which is consistent with anecdotal evidence we have gathered.
“Conversely, we have seen a rise in hardware and services investment by media businesses — these bounced back compared to 2020-2022. The uptick in services is consistent with a positive outlook for outsourcing, which may be driven by media businesses attempting to reduce costs and circumvent talent shortages.”
Another trend is scarcity, both in terms of supply chain components, which continues to disrupt the industry and send prices higher (65% of media tech suppliers reported moderate to severe supply chain issues in Q1 2023 — significantly down from the 97% reported last year), and in talent. In fact, the labor shortage has gotten worse year-on-year, the IABM finds. Eighty-seven percent of respondents said that it is difficult or very difficult to recruit technical talent at the moment, with 69% saying that the situation has worsened in the last three years.
“Companies are telling us that they’re finding it very difficult to recruit talent at the moment,” said Zanni. “It is not up to single companies to solve that. It’s a complex issue and we will be publishing a report on that post-NAB, talking to some stakeholders to try to find some solutions about that.”
The IABM was keen to strike an optimistic note: while macro headwinds undoubtedly represent a challenge for MediaTech businesses in 2023, they could also be an opportunity for some.
NAB Show Announces Recipients of Excellence in Sustainability Awards
Watch the full NAB Show session above.
TL;DR
NAB Show has named the recipients of the 2023 Excellence in Sustainability Awards during a live awards ceremony April 16 at the 2023 NAB Show in Las Vegas.
The Excellence in Sustainability Awards, which are supported by Amazon Web Services, recognize individuals, organizations, and products/services for outstanding innovations in media technology that promote conservation and reusability of natural resources and foster economic and social development.
An independent panel of sustainability experts selected winners in one category each for The Sustainability Champion Award, The Sustainability in Leadership Award, and The Sustainability in Product or Service Award.
The honorees were announced during a live awards ceremony April 16 at the 2023 NAB Show in Las Vegas.
The Excellence in Sustainability Awards, which are supported by Amazon Web Services, recognize individuals, organizations, and products/services for outstanding innovations in media technology that promote conservation and reusability of natural resources and foster economic and social development.
An independent panel of sustainability experts selected winners in one category each for The Sustainability Champion Award, The Sustainability in Leadership Award, and The Sustainability in Product or Service Award.
“The 2023 NAB Show Excellence in Sustainability Awards honor leaders who have influenced their teams to achieve a more sustainable pathway, organizations that have launched or completed sustainability initiatives, and products or services that significantly improve sustainability or provide sustainable market alternatives,” said NAB Executive Vice President of Global Connections and Events Chris Brown. “We’re beyond grateful for AWS’ support of this award program. This year’s winners can help the content industry meet the critical environmental challenges of today and tomorrow while addressing all stages of the content lifecycle.”
“Amazon is dedicated to net-zero carbon emissions by 2040, and we are proud to join NAB in recognizing individuals and entities that are championing sustainability,” said Marc Aldrich, General Manager of Media & Entertainment at AWS. “Congratulations to the winners for their ongoing commitment to seeing the media technology industry flourish in a sustainable manner now and in the future.”
Proceeds from the inaugural Excellence in Sustainability awards will go to Creative Visions and AWS.
This year’s winners are:
The Sustainability Champion Award (an individual): Chris Brähler, Vice President of Product, SDVI (small organization); Larry O’Connor, Founder and CEO, Other World Computing (medium organization).
The Sustainability in Leadership Award (an organization): Mrs. Greenfilm (small organization); Amino (medium organization); Media.Monks (large organization); Greening of Streaming (non-profit organization).
The Sustainability in Product or Service Award: SmartFM, WorldCast Systems (small organization); X Platform, Appear (medium organization); CableOS Broadband Platform, Harmonic (large organization).
In addition, NAB Show recognized the following companies with Honorable Mentions: Barco, Broadpeak, EVS, IBM, Immersion Room, Varnish Software.
FAST Is “TV on Steroids” as Content, Channels, and Revenue All Go Hard
Watch the full NAB Show panel above.
TL;DR
A panel of FAST channel execs — including from Fox-owned Tubi, Paramount-owned Pluto, Lionsgate, and Chicken Soup for the Soul — debated content strategy, how to utilize targeted advertising effectively, and content discoverability on the NAB Show stage.
This is set against the backdrop of global revenues from FAST channels, which are set to reach $6.3 billion in 2023 and $12 billion by 2027.
One executive described the FAST industry as “television on steroids.”
With revenue from Free Ad-Supported Streaming TV (FAST) channels set to exceed $10 billion by 2027 in the United States alone, there’s a lot riding on getting the go-to-market strategy right.
Executives in charge of four of the leading FAST services convened at the 2023 NAB Show to give a 360-degree overview of the sector and where it is headed.
Among them were Paramount Streaming EVP of Content Strategy and Global Partnerships Amy Kuessner; Tubi Chief Content Officer Adam Lewinson; Lionsgate President of Worldwide Television Distribution Jim Packer; and Chicken Soup for the Soul Entertainment’s Chief Revenue Officer, Philippe Guelton.
“Our vision is that the future of streaming television is going to be predominantly on demand,” Lewinson said.
Guelton described FAST as “television on steroids” adding, “there’s so much more of it that can now be surfaced. I think you really have to look at the entire Connected TV ecosystem of FAST, AVOD and VOD.”
Globally, FAST channels will generate $6.3 billion this year, according to research from Omdia. The US, the largest market for FAST channels, is expected to account for 80% of that revenue; however, Omdia notes that the UK, Canada, and Australia are also growing strongly. In fact, the worldwide market for FAST will see revenue triple between 2022 and 2027 to reach $12 billion.
Maria Rua Aguete, senior director at Omdia, added a note of caution: when the $12 billion is viewed in the wider context of online video, “social video remains the growth story for the next five years,” she said. “FAST channels are another window to monetize content, but not the only one.”
“We’re in a transitional period and traditional linear has been in secular decline for a long time,” added Lewinson. “We’re just delivering a better solution for TV viewership and one that can heavily lean into personalization.”
The panel talked about the different demographics attracted to watch their FAST offerings.
Chicken Soup’s Guelton said, “We tremendously over index in Hispanic and African-American audience. I want to say the general market for African-American television viewing is 20%. We’re at 30% and it might even be higher for us. We were one of the first platforms to roll out an entire Hispanic offering.”
Paramount owns Pluto, one of the FAST pioneers. Kuessner said, “We’re actually a little bit more towards the 40-year-olds [because] we’re so broad with classic and nostalgia programing.”
Half of Tubi’s viewers identify as multicultural and around a third of its audience is in that coveted 18-to-34 age range.
“What’s really interesting, we see this every day, and probably one of the many reasons why we’re still very heavily into VOD as opposed to FAST is so much of our audience grew up on YouTube or to some extent grew up on Netflix,” said Lewinson.
“They like the on demand,” he added. “They like our personalization tools. They’re used to navigating the algorithms.”
In a wide-ranging discussion, the group touched on topics including ad volume, media buying, programming, distribution partnerships, and how consumers best navigate thousands of channels available to them.
“We look out over the next three years and try to figure out what are the shows that are going to really resonate,” said Lionsgate’s Packer. “We’ve really started to get much more tactical and strategic. Licensing is still the primary business [for us].”
All agreed that human curation was vital to build content profiles alongside an element of automation. Experimentation in channel brand and content was also part of the mix.
“It’s that whole art and science, the creative judgment that goes into the channel and then it’s rigorous data analysis,” said Kuessner.
Originals remain a very important part of the FAST ecosystem. Tubi has roughly 100 originals on its channels, “and we’re driving tremendous engagement,” Lewinson said. “It’s often that an original is the number one title on the platform. One of our proprietary FAST channels is called Tubi Originals.”
Pluto is taking a different approach. “We don’t necessarily see originals as a must-have on our platform and that’s mainly because we’re owned by Paramount [which sends its originals to Paramount+],” said Kuessner. “We are looking, though, to get a little bit more creative in terms of how we work with Paramount+. So, for instance, we just announced that when every new season comes out on Paramount+, the entire previous season will debut on Pluto TV.”
As the FAST space explodes, a new report projects that by 2025 ad spend on FASTs will surpass that of cable, broadcast or SVOD services.
April 30, 2023
April 28, 2023
It’s a New View of Premium Content As IMAX Goes Beyond the Big Screen
Watch the full NAB Show session above.
TL;DR
The “IMAX Beyond the Big Screen” panel at NAB Show 2023 discussed how IMAX is bringing content to the home screen and how content owners and streaming services are creating high quality, engaging video experiences.
The company explains its ambition to deliver top-tier content at a premium to consumers and teases the idea of an IMAX-branded sports streaming service.
IMAX also discusses how new acquisition SSIMWAVE will help the company optimize its IMAX Enhanced content for consumer devices, but says it will not dilute the quality of experience just to achieve bitrate savings.
In IMAX’s first-ever official appearance at NAB Show, the company explained how it aimed to take its premium brand of giant screen content into the home.
IMAX’s presence was led by Vikram Arumilli, SVP and GM of Streaming and Consumer Technology.
In the “IMAX Beyond the Big Screen” panel, Arumilli explained that IMAX worked with device manufacturers including LG, TCL, Hisense, Philips and Sony to ensure that titles badged as IMAX Enhanced are shown on the highest premium tier of devices.
“All of these devices have very stringent technical specifications that they have to hit. These include TVs, home projectors, soundbars and speakers. On the content side, we deliver IMAX Enhanced versions of titles to our streaming partners, including to Sony’s Bravia TVs and Disney+.”
IMAX has around 700 titles available as part of its Enhanced offer.
“First and foremost, IMAX’s bread and butter is still in theatrical and the IMAX Enhanced titles that are coming to market are primarily titles that have gone through our theatrical footprint first and then on a streaming service.”
He said, “IMAX is where it is today because we’ve really focused on delivering to filmmakers the ability to have their creative intent seen by consumers at the highest level. And that’s what we want to do in the home as well. That ties into why we made this acquisition of SSIMWAVE. It’s all about maintaining that creator intent, whether it’s theatrically or in the home or on your personal device.”
IMAX acquired Canadian company SSIMWAVE for $25 million last year. Its products include technology that optimizes ‘bandwidth spend’ and so enables service providers to deliver content at quality in the most efficient way.
“The goal of IMAX Enhanced is to drive a premium experience for streaming in the home. There’s a clear synergy between what SSIMWAVE is doing around optimizing quality of streaming content and what we’re trying to do with Enhanced.”
Also on the panel was Bill Baggelaar, EVP and CTO at Sony Pictures Entertainment, who said, “The hardest part is ensuring that we are providing the consumer the right quality. This is where SSIMWAVE is an interesting technology that can really help the industry to better understand what’s actually getting to the consumer. We can see…how close we are getting to that experience that we are trying to really offer.”
Arumilli pointed to “bandwidth constrained” markets like India as a potential opportunity for the service. “Given that subscriber spend is pretty low, there’s a real opportunity there to allow these services to continue to deliver that really premium experience, but do so in a way that is financially responsible for them and allows them to scale their businesses in a profitable way,” he said.
He claimed that IMAX Enhanced was already driving engagement and retention for service providers but was not yet in a position to charge consumers more for specific IMAX Enhanced content.
“We’d love to be in a position where IMAX Enhanced is part of a premium tier, but those are discussions that are much more long term. You could see that — with what HBO Max is doing by offering a $20 tier with access to higher quality content, that’s somewhere where we want to play eventually.”
He also aired the possibility of an IMAX-branded sports streaming service. “When you look at a lot of the feedback around live sports today there’s always complaints about video issues, audio issues, buffering issues, just general quality issues. We want to figure out how to solve those issues and make the experience for consumers more of that IMAX experience.
“We don’t have a clear, clear answer there yet, which is why I can’t get into more detail. But it’s something where we think there’s an opportunity. There’s a big market for sports content in the US and in countries like India, which is the home of IPL cricket.”
The conundrum is how IMAX gets its service to the mass market versus keeping it high quality by potentially trading off lower bit rates in exchange for wider device distribution.
Arumilli stressed that his company won’t water down the IMAX streamed experience.
“IMAX has always stood for premium quality. Even looking back at how we build our theaters, each one is built to spec. There’s no plug and play IMAX theater that you drop into a multiplex. It’s all about looking at the architecture of the theater, building it to spec,” he said.
“We don’t claim and we never will claim that the IMAX Enhanced experience is similar to what you get in our theaters. That’s a completely separate experience. What we’re trying to do is figure out how you bring the best out of what we do theatrically into the home, how do you best replicate that, even though you’ll never hit that bar.”
He added, “We’re not going to have the IMAX Enhanced brand on every $250 TV in Walmart because if we do that, then you lose what we mean to consumers, you lose what we mean to creators and you lose what IMAX stands for. So for us, that’s a pretty clear line that we won’t cross.”
What Are the Human Implications of AI and Creativity?
TL;DR
A panel of artists discusses the “human” implications of generative AI and how AI impacts all of our jobs and lives in the creative arts.
Generative AI is deemed to de-value creativity and will lead to a morass of artistic mediocrity and conformity. Human artisanal works, by contrast, will be perceived as having higher monetary and artistic value.
Should artists physically document their creative process to prove the work was achieved with human endeavor?
Generative AI will iron out diversity, human error and human effort, leading to a disastrous homogenization of culture that devalues the content, claim artists including Grammy Award-winning musician Alex Ebert and digital artist Don Allen Stevenson.
“I’m going to be that voice [which says] it’s going to diminish the quality of our artistic output,” said Ebert. “There’s a very strange inverted relationship between democratization of taste and homogenization of output.”
He decried the idea of artists “reduced to simply a [human] being that prompts” an AI to create.
Ebert is the lead singer and songwriter for the American bands Ima Robot and Edward Sharpe and the Magnetic Zeros. He also scores films and won a Golden Globe Award for composing the music to 2013’s All Is Lost.
Tweaking a movie based on test screenings is a long-standing studio tactic for “correcting” a film by recutting it before release. Generative AI could radically amp up that process, to Ebert’s horror.
“It’s suddenly like every other movie you’ve ever seen because the process of democratization actually leads to homogenization. You end up with a median [average] opinion. And I’m afraid that that is just what’s going to happen.”
Don Allen Stevenson, a multidisciplinary digital creator and crypto artist, agreed, and thought that cultural homogenization would push artisanal creations to a higher artistic and financial level.
“If everyone is able to generate so-called ‘high quality’ AI-driven art, video or music, and if the cost of those things is only represented as digital assets, it will reduce the quality. But I think simultaneously it would increase the value of physical things that are more tangible.”
This is one theme of the metaverse bible “The Diamond Age” by Neal Stephenson. In his science fiction, there are 3D printers that can print anything on demand from a text prompt.
“So it made the cost of materials very low,” said Stevenson. “But then what people seemed to value in this example was stuff that was handmade. They loved that. The elites and the rich in this novel loved their handcrafted things because they were truly unique.”
“That’s why distortion became interesting,” said Stevenson referring to the idea that much of what we appreciate about art stemmed from a mistake.
“Constraints are how originality occurs. So where are the limitations of AI? And when we find the limitations of AI it might become interesting. The only cool things I’ve seen AI spit out so far [are when] the AI fails — where it can’t do fingers and it makes all these weird images and where ChatGPT is spitting out nonsense.”
Ebert said he doesn’t use AI to produce music. “It’s honestly not that much fun. It’s quicker and more productive, but it’s not as interesting for me. I don’t reach interesting limitations. I don’t end up with an interesting sound that you could never recreate because of the reflections in the given room [or the way I’m playing a particular instrument on that day in that room]. These constraints, these failures are important.”
So, where are the constraints and failures of AI that will be interesting enough to forge your own path apart from it?
He argued that humans still have an affinity for the idea of an object or piece of content with tangible origins.
“If we see something artisanal we’re like, ‘that was made by hand and it’s a one of a kind and it makes me feel special because it is special.’ But in order for that to happen, you have to have a sense of a tangible origin.”
Yet we’re easily beguiled by imitations of tangible origins. “We’ll buy the pre-ripped jeans, we will buy an experience of struggle. We’re buying the thing at Urban Outfitters that looks like it’s from Peru and made by hand but [in fact] it’s mass produced to look like it’s from Peru.”
Following on from this, the panel pondered whether proof of human craft in producing art would be required in order to validate its artisanal value in the age of AI.
Stevenson suggested an artist could document their process to “show what creative human made decisions were made, what was the intentionality, what was the heart?
“And if there were a legal structure that could look at that when judging the output, like how much human level work went into that generated thing.”
Stevenson added that he’s been encouraging people to live stream and document and record voice memos in the process of creation to act as a chain of proof.
“Have people interview you as you’re making whatever art form you’re making and then have that be a part of the art piece, have that be the story.”
He continued, “Humans are very narrative based. We love story. So, if you don’t have a story that shows that you put human level love, energy and heart into that thing, then you’re just an AI generated automation, homogenous nonsense. But if it’s like, Wow! this person put a lot of actual blood, sweat and tears into this and we can measure that, we can record that, then maybe [that might work].”
Csathy summed up, somewhat fatalistically, “I want to believe that no matter how sophisticated AI gets, there’s something about humanity that will be appreciated and that will be differentiated, so we convey the humanist aspects of it. We all just have to be very stoic about the fact that this is happening.”
Thousands of AI tools will be developed, each of them performing a specialized function so that we don’t have to, says Signal and Cipher CEO Ian Beacraft.
One result will be the birth of “just in time skills,” which takes its cue from the “just in time” manufacturing process.
Soon most content we consume will be synthetic. It’s a logical leap from Alexa and Siri to conversing with our avatar, and even having feelings for it.
A lot of people are afraid they’re going to lose their jobs to a machine, but we won’t. We’ll just lose our job descriptions.
“Generative AI is digitizing skillsets, making them programmable and upgradeable,” says Ian Beacraft, CEO of Signal and Cipher, in a SXSW session. “As a result, a new class of generalists will dominate the era of generative AI.”
With expertise and experience no longer needed to perform with proficiency, those people with a breadth of experience “and passionate curiosity” will rise to the top, says Beacraft.
“With AI, individual creators can become armies of one.”
This will flip the corporate world on its head, he says.
“I believe that this is the moment of the greatest revolution of knowledge work in human history.”
He takes his lead from the Industrial Revolution of the 18th and 19th centuries, when many manual jobs were mechanized.
“Now we’re doing the same thing with the mind. When you extract the need for labor to be present for something, you free it to do other things like manage process, enhance the product, or even think of other ideas.”
That’s a simplistic reading of the Industrial Revolution, which enslaved hundreds of thousands of people to machines and passed newfound wealth and power into the hands of a small cadre of capitalists. Did your average factory worker suddenly enjoy the freedom to reinvent their lives and forge new ideas? No, they were too busy trying to put bread on the table to feed their families.
Anyhow, the AI revolution won’t do that, will it? Not according to Beacraft, who is an AI booster.
He thinks that hundreds or thousands of AI tools will be developed, each of them performing a specialized function so that we don’t have to. That’s why humans will become generalists in whichever industry they’re in, because our new role will be to curate, tweak and stack AIs to do particular jobs.
“This is the era of the creative generalist. We come from a space where we are all tooled to specialize from early age. When you’re specialized you build your expertise, become an expert, and become indispensable in that space,” he says.
“Well, now we’re in an era where AI can outpace any one individual very quickly in a specific domain. The idea of specializing early [in your career] can actually be a detriment. Those who have expertise and depth in several domains and interests and passionate curiosity across a broad swathe, are the people who are going to dominate the next era.”
One result will be the birth of “just in time skills,” which takes its cue from the “just in time” manufacturing process.
“If I need to for a minute put my copywriting hat on, I can do that. If my project manager’s out for a minute and I need to do their job for a moment, I can use my tools to just slide in that direction. All of a sudden I have the capability to jump into a role as needed and I have the expertise I need to perform that role on demand.”
This is going to be the big change in corporate organization. “When your organization no longer expects incremental growth in a specific role, but you have teams of people working horizontally with a strong depth of expertise in a particular area. All of that new capability is net new, not incremental.”
The next step is that these AI tools will actually start to learn how to use other AI tools themselves.
“Even our relationships will change,” Beacraft insists. “In a time where our behaviors are guided by algorithms, and humans become more machine-like, machines are becoming more human. AI companions provoke emotions and elicit feelings of romance, while children are less concerned with whether their friends are real or synthetic.”
Soon most content we consume will be synthetic. It’s a mere extension from Alexa and Siri to conversing with our avatar.
“While so many of us would like to say I can’t be tricked into having feelings for an AI, there are people who would beg to differ.
“Have you ever cried or yelled at a movie or a book and even stuffed animals as a kid? We already develop bonds with other species — dogs, horses, cats, our pets, fictional characters so why would AI be any different?”
The Future of Production Amplified: How ETC@USC is Helping Hollywood Embrace Generative AI
TL;DR
Yves Bergquist, Director of the AI & Blockchain in Media Project at ETC@USC, discusses generative AI, how it’s being used, and what it could mean for the future of film and television production.
Generative AI has the power to automate the creative process, allowing for the production of high-quality content at an unprecedented rate, and can be used as a creative assistant for storyboarding, look book schematics, and even how projects are pitched.
Generative AI is being seriously explored and developed in the pre-production to post-production process, and Bergquist predicts that it will radically transform virtual production and in-camera visual effects.
The transformation generative AI represents in the M&E industry is enormous, and while it will enhance the creative process, it’s a new language that creatives need to learn to speak.
Generative AI has quickly become one of the most talked-about technologies in the Media & Entertainment industry, and for good reason. This groundbreaking technology has the power to automate the creative process, allowing for the production of high-quality content at an unprecedented rate. From video games to feature films, generative AI is revolutionizing the way content is created. However, while it can speed up production and open up new creative possibilities, it’s not a replacement for human creativity and intuition. To unlock the full potential of generative AI, and stay ahead of the wave that is transforming the M&E landscape, industry professionals must understand not only its capabilities but also its limitations.
As part of our series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with Yves Bergquist, Director of the AI & Blockchain in Media Project at the Entertainment Technology Center at the University of Southern California, or ETC@USC, about generative AI, how it’s being used, and what it could mean for the future of film and television production.
Bergquist brings deep experience as an AI researcher to his role at ETC@USC, where he and his team use frontier AI methods to help media companies develop insights into their content, their audiences, and the cognitive relationship between the two. In addition, Bergquist is a member of the Digital Storytelling Lab at Columbia University’s School of the Arts, and the co-chair of SMPTE’s Joint Task Force on Artificial Intelligence Standards in Media, where he helps drive the standardization of AI methods throughout the media industry.
We should think of generative AI as a creative assistant, Bergquist suggests, noting that the burgeoning technology will be most disruptive to the pre-production phase, revolutionizing storyboarding, look book schematics, and even how projects are pitched. “You can literally create a very sophisticated first draft of your film or TV show” now with generative AI, he says.
Virtual production and in-camera visual effects will also be radically transformed through the use of neural radiance fields, he predicts. “That is a little bit further out because it’s a little bit more technically sophisticated, but definitely coming and massively disruptive for post-production, where a whole bunch of workflows that used to take weeks can now take just a few minutes or hours.”
In Part 1 of this exclusive Q&A, Bergquist covers the basics of generative AI — what it is and what it isn’t, and the kind of work it performs best. He also talks about the radical impact it’s having on the Media & Entertainment industry, walking us through the various stages of the content creation value chain, and how the true disruption is the increase in production value available to content creators of all levels, from filmmakers and showrunners to TikTok influencers.
Watch Part 1 below:
In Part 2, Bergquist dives into the various ways independent filmmakers and studios are using generative AI. “We’re at a point where the experimentation phase is ending and the actual use of these methods in the pre-production to post-production process is being seriously explored and developed,” he says.
Bergquist also discusses the disruptive potential of generative AI on marketing and SEO. “We’ve seen a pretty extraordinary capability of generative AI to output entire websites, create entire blog posts,” he observes. “The SEO game is all about volume, and so the ability to create an enormous volume of blog posts, website content, just general internet content at the push of a button is extremely disruptive for the SEO industry.”
Watch Part 2 below:
In Part 3, Bergquist shares what he sees for the future of generative AI, naming the winners and losers inside this new ecosystem. “There is no future where AI doesn’t destroy jobs,” he comments, noting that the transformation generative AI represents in the M&E industry is enormous. “Twelve months, 18 months from now, we’ll probably look at a VFX workflow that is pretty radically different than what it is today.”
To prepare themselves, Bergquist says, creatives need to educate themselves in the new technology as much as possible. “Embrace it, dive into it, try it. You know, become expert at it in the same manner that text processing and word processing took over from typewriters. This is a new set of tools that will enhance the creative process. But it’s a new language and you have to learn how to speak it.”
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
NAB Show LIVE Stream Is… Live!
TL;DR
Watch the live stream of the 2023 NAB Show from 9 a.m.-6 p.m. (PT) at NABShow.com, April 16-19.
The stream will focus on the floor experience, featuring exhibitor interviews, demos, new product spotlights, Experiential Zone experiences, and other on-floor events.
Select Main Stage sessions will also be available to view on demand after the show.
NAB Show will again live stream highlights from the 2023 exhibit floor and main stage in the Las Vegas Convention Center, starting Sunday, April 16.
Those unable to travel to Sin City for this year’s show can head to nabshow.com for a daily view of what they’re missing. From 9 a.m. to 6 p.m., viewers can tune in to see exhibitor interviews, demos, new product spotlights, Experiential Zone experiences, and other on-floor events.
“This year’s focus is to bring spontaneous ‘from-the-floor’ experiences while covering exciting industry announcements and celebrating NAB’s Centennial,” said Salazar.
Broadcast Beat Studios will produce this live stream as a remote production, with the crew split between Vegas and Broadcast Beat’s central control room in Ft. Lauderdale, Fla. Two camera crews will transmit their four video feeds back to Florida to be edited into the NABShow.com stream.
Why It’s Time for Hollywood to Think Seriously About AI
TL;DR
After a cautious approach to ChatGPT-type products, guilds and creators are becoming more vocal about limiting AI’s influence in entertainment.
From writing and directing to producing and marketing, AI is being used in various ways to make Hollywood more efficient and effective.
However, with these advancements come potential risks and challenges, such as the loss of creative control and the homogenization of output.
AI is being introduced to the creative industries at pace, and at what some see as the risk of a loss of control. On the one hand, ChatGPT, Midjourney, DALL-E and others are being marketed as tools to aid the creative process by speeding up time-sapping processes and providing a spark for ideation.
Not everyone has bought into this narrative, however, and now writers are following artists in speaking out against the wholescale introduction of AI without due consideration for its impact.
“After a cautious approach to ChatGPT-type products, guilds and creators are becoming more vocal about limiting AI’s influence in entertainment,” reports J. Clara Chan in The Hollywood Reporter.
Creators like Cassey Ho, who’s behind the popular fitness brands Blogilates and Popflex, say they’re wary of supporting AI tools that can easily exploit the work of artists.
“I like the idea of it being a co-pilot, but when it’s riding off the backs of creatives, I don’t feel good about it,” Ho said at SXSW, as reported by THR.
The same anxieties around credit and compensation extend into the inner workings of Hollywood, where unanswered questions about AI’s ability to transform the future of entertainment have already informed discussions at unions like the Writers Guild of America and SAG-AFTRA as writers and actors, among others, seek to protect their work and right to compensation.
“Human creators are the foundation of the creative industries and we must ensure that they are respected and paid for their work,” SAG-AFTRA said in a statement on March 17. “Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creative works, or professional voices and likenesses, without permission or compensation. Trustworthiness and transparency are essential to the success of AI. SAG-AFTRA will continue to prioritize the protection of our member performers against the unauthorized use of their voices, likenesses and performances.”
The Writers Guild is also in the midst of negotiations with studios around the use of AI in the writing process, likening tools like ChatGPT to research material like, for instance, Wikipedia. “The WGA’s proposal to regulate use of material produced using artificial intelligence or similar technologies ensures the Companies can’t use AI to undermine writers’ working standards including compensation, residuals, separated rights and credits,” the guild wrote.
Earlier this month, the US Copyright Office declared that AI-“assisted” works could be eligible for copyright protection. It stated: “Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material.”
Yet this hasn’t assuaged many creatives.
“There’s a fine line between when is something inspiring someone versus when is someone just ripping off or absolutely treading protected intellectual property,” insisted Candle Media’s chief development officer, Brent Weinstein, at SXSW. “AI is going to force us to examine that fine line and rules will be written, and we will all adapt to a new world order.”
Writers won’t be the only ones affected by this new trend. Directors should also be concerned, writes Jason Hellerman at No Film School.
On the positive side, AI could be used to create virtual sets, which could help directors visualize scenes and make decisions about camera angles and lighting before filming begins. AI could also be used to analyze and edit footage, making the post-production process more efficient and cost-effective.
However, AI could potentially replace human directors altogether. “We would instead have computers trying to tell us about the human experience or estimating emotions they are not complex enough to feel. This could lead toward an overreliance on tropes or the points of view of the people who created the AI, which may not be reflexive as a whole.”
When it comes to producing, AI could be used to help producers with tasks such as predicting audience response, optimizing marketing strategies, and even identifying potential investment opportunities.
AI algorithms could analyze audience data to predict which types of films or TV shows are likely to be successful, helping producers make more informed decisions about what projects to pursue. AI could also be used to analyze marketing data and make recommendations about how to reach and engage audiences more effectively.
“In reality, this kind of intelligence might completely eliminate producers,” says Hellerman. “Who needs someone to make calls to package when a computer can send form emails to agents or use its metrics to decide which projects it should be greenlighting?”
To underline the point, Hellerman reveals that the article under his name was largely written by AI, albeit tuned and polished by the author. ChatGPT even mimicked the No Film School website format.
From writing and directing to producing and marketing, AI is being used in various ways to make Hollywood more efficient and effective. “However, with these advancements come potential risks and challenges, such as the loss of creative control and the homogenization of output,” Chan suggests.
The fact is, contends Hellerman, “when giant corporations buy a bunch of Hollywood companies, they are looking for ways to strip the movie and TV process down. How can we employ fewer people and maximize profits? Well, I think they will do it with computer-generated stories and positions.
“That spells less creativity and originality and work for us all.”
NAB Show: Generative AI, Bringing Together the “Why” and “How”
TL;DR
Generative AI (think ChatGPT and DALL-E) is poised to change the media and entertainment industry in myriad ways.
Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag will discuss how creatives can use generative AI tools to facilitate their work today at a NAB Show Create session on April 17 at 3 p.m.
A NAB Show panel discussion aims to separate the hype from the “how” and “now” of generative AI for M&E.
This panel, featuring AI & Blockchain in Media Project Director Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag, will discuss how generative AI tools can help the media and entertainment industry in 2023, and consider how this technology might disrupt and augment workflows in 2024 and beyond.
Discover where Bard, Whisper, and DALL-E might fit into your creative process, and learn about other AI tools that could soon automate microworkflows at a desk near you.
A NAB Show Conference Pass is required for this session. Register here.
Speakers
Yves Bergquist is a data scientist and the director of the AI & Neuroscience in Media Project at USC’s Entertainment Technology Center, where his team helps the entertainment industry accelerate the deployment of next-generation analytics standards and solutions, including artificial intelligence.
He is also the CEO of AI engineering firm Novamente, which applies neural-symbolic artificial general intelligence to large enterprise problems. Novamente is the AI developer behind Hanson Robotics’ “Sophia.” His team also built the world’s very first fully autonomous AI-driven hedge fund, Aidyia, which is now defunct.
Before Novamente, Bergquist managed business development at analytics firms Bottlenose and Ranker in Los Angeles. He was part of the founding team at Singularity University, a joint venture between Google and NASA.
Pinar Seyhan Demirdag is an AI director, multidisciplinary creator, visionary, outspoken advocate for the conscious use of technology, and opinion leader in generative AI.
In 2020, Demirdag and Gary Koepke founded Seyhan Lee, which has become the bridge between generative AI and the entertainment industry. Seyhan Lee created the first generative AI VFX for a feature film (“Descending the Mountain”) and the first brand-sponsored generative AI film (“Connections/Beko”).
In 2022, they announced Cuebric, a tool that combines several different AIs to streamline the production of 2.5-D environments for virtual production stages.
The panel will be moderated by NAB Amplify Senior Editor Emily M. Reigart.
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
Who Should Own the Data That You (as a Human) Are Generating?
TL;DR
At SXSW, Brittany Kaiser, co-Founder of the Own Your Data Foundation, presented a framework for governments and legislators to help us all take back control of our personal data.
Web3 tools can be used to create a new digital architecture which secures our digital identities and gives individuals the power of consent.
The concept of digital identity means that we are able to use an identity that is not linked to our personally identifiable information in blockchain technology.
Each of us is producing exponentially more data than ever before, but do we have any power over it at all? If data (outside health) is the most valuable asset we hold, shouldn’t we be more concerned about taking back control?
Big questions which Brittany Kaiser, co-founder of the Own Your Data Foundation, believes can be answered. Delivering a keynote address at SXSW, she urged a concerted effort to reset our relationship with the big tech ad machines governing our private information by embedding ownership of our data in Web3 technologies.
She’s not the first to chart our recent troubled history with data being siphoned off by Silicon Valley giants like Google and Facebook.
“Technology has been designed to be inherently extractive,” she said, “to extract as much value from individuals and pull that value up to the top of the supply chain, [where] multibillion dollar or even trillion dollar technology companies are mostly made up of our digital assets, our personal data, our behavioral data, everything about us.
“But somehow we, as the producers of that, don’t have any access to its value. How in this multitrillion dollar value chain do the producers of most of the value not really have access to that monetary value, let alone the process of the supply chain?”
It’s not just data on what we watch or what we shop for, either. Even when we give permission for apps to work on our devices, we’ve probably given them access to our calendar, GPS, photos and videos, and even our camera and microphone, even when we’re not using the app and have no trust basis with those organizations.
“This is why data rights is one of the most important topics in legislation, in regulation and human rights, in education, and of course, in the design of new technologies.”
Kaiser said there is a movement, of which she is part, to make technology more ethical with more individual empowerment, though admits the conversation at government levels has only just started.
She walked through the steps she thought needed to happen for us all to take back control. This begins with the ability to opt out, which is now possible in Europe and is being introduced in the US.
The next step is consent and permission, so that we understand and agree to the purpose for which our data is being used.
“We should be able to revoke that consent as well,” Kaiser said. “So the next step is accountability. A lot of the data architecture that is used in current technology has a lack of accountability, because if data is transferred, if data is shared or if data is deleted, often it’s not possible to tell that that has happened. Using current technologies, it’s very difficult to have that actual accountability.”
The next concept she talked about was ownership. Under most laws around the world, we do not own our personal information, she said. “Our personal information is either owned by the government or it is owned by the company that has collected it from us.”
All of this can be built using Web3 technologies to create a different data architecture.
“I really believe that blockchain technology has the ability to scale trust in a way that nothing else has been able to up to this point. In order to know that I can interact with anyone around the world without having to have a human trust between two people, we can build technologies that protect us so that I don’t need to trust the person I’m interacting with.”
Encryption, she said, can be used to make sure that our personally identifiable information doesn’t need to be shared unless we want it to be. It will ensure that every action we take online can be anonymized while our collective behavioral data can be used by companies and governments.
“The concept of digital identity means that we are able to use an identity that is not linked to our personally identifiable information in blockchain technology,” she said. “Hopefully we [are] building as an industry enough tools where this is going to be very simple in the future.”
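Kaiser kept the talk at the level of principle, but the pseudonymization idea she describes can be sketched in a few lines. The following is a minimal illustration assuming nothing more exotic than a keyed hash; the salt handling, field names and event shape are invented for the example and are not drawn from any Own Your Data Foundation tooling.

```python
import hashlib
import hmac
import os

# A server-side secret; in a real system this would live in a key vault,
# and rotating it would unlink every previously issued pseudonym.
SECRET_SALT = os.environ.get("IDENTITY_SALT", "dev-only-salt").encode()

def pseudonymous_id(email: str) -> str:
    """Derive a stable identifier that cannot be reversed to the email address."""
    normalized = email.strip().lower().encode()
    return hmac.new(SECRET_SALT, normalized, hashlib.sha256).hexdigest()

# Behavioral events are keyed to the pseudonym rather than the person,
# so aggregate analysis works without exposing personally identifiable information.
event = {
    "user": pseudonymous_id("viewer@example.com"),
    "action": "played_episode",
    "title": "S01E03",
}
print(event["user"][:16], event["action"])
```

The point of the sketch is simply that behavioral data can be joined and analyzed on a key that is not the PII itself; whether that key comes from a keyed hash, a decentralized identifier or another Web3 mechanism is exactly the architectural question Kaiser is raising.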
It’s not that data is by itself evil. “Big data can solve a lot of the world’s greatest problems,” she said. “That’s why most of the big NGOs, United Nations departments, governments, militaries and humanitarian aid organizations [are] all relying on large-scale data sets and data science and data-driven research. The more data we have, the more that we can see patterns, the more that we can predict what is going to happen before it does and intervene.
“So it is of the utmost importance that we as individuals, that our governments and technology companies, start to take these issues incredibly seriously so that we can make sure that the architecture of our digital lives starts to become more congruent with the ability for us to protect our rights.”
The Future of Production Amplified: Moving Middle Earth to the Cloud with “The Lord of the Rings: The Rings of Power”
TL;DR
Producer Ron Ames discusses the fully cloud-based workflow for the first season of Amazon Prime Video’s “The Lord of the Rings: The Rings of Power.”
“The Rings of Power” was the first production to intentionally use a cloud-based workflow from end-to-end, allowing assets to be available to anyone at any time within a secure network.
The workflow was developed using AWS S3 buckets, which housed assets for 12 global vendors and 1,500+ artists located around the world.
Blackmagic Design collaborated with Company 3 to develop a cloud-based color correction and finishing pipeline that allowed CO3 colorist Skip Kimball to color grade episodes remotely with real-time review and feedback.
Ames says cloud-based workflows have changed production and post-production forever, with the scalability of the cloud making it available to productions of all sizes.
As part of our series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with producer Ron Ames about the fully cloud-based workflow for the first season of Amazon Prime Video’s The Lord of the Rings: The Rings of Power.
Ames is an industry veteran with post-production credits going back to 2003, when he was enlisted by VFX supervisor Rob Legato to assistant direct and produce the visual effects for Martin Scorsese’s The Aviator. Since then, he has worked as a VFX producer, post-production supervisor, and first assistant director of special units on more than 20 productions including Avatar, The Departed, Star Trek Into Darkness and Star Trek Beyond, and Avengers: Age of Ultron.
In his current role as producer on The Rings of Power, Ames oversees all the technical departments from camera capture to exhibition. The Lord of the Rings franchise is known for its award-winning visuals that have pushed the art and science of visual effects, which meant the stakes were high for the first season of the first episodic series to emerge from Middle Earth. From the very beginning, the production team understood they would need an end-to-end cloud-based workflow to complete the first season, which contained a whopping 9,164 VFX shots shared among 12 global vendors and 1,500+ artists located all around the world.
“I think we are the first production to be fully cloud-based intentionally from the beginning,” Ames says, detailing the workflow and the impact it had on the creatives and extended crew. “As we discovered what that meant, it actually became an even larger part of our show.”
For Ames, the surprising thing about a fully cloud-based workflow was that it meant that assets were available to anyone at any time, all within a secure network. “We hadn’t thought of that when we started,” he recounts. “That wasn’t something that we set out to do, but we found that it was useful and it became standard. All of these modern technologies are brand new and then, once they’re used, it becomes a standard. So the directors just expected it. All the artists on the show, the showrunners, everyone knew that anything was available at any time, wherever we were, as long as we had Internet. And that’s becoming the new standard.”
In Part 1 of this exclusive Q&A, Ames provides an overview of the cloud-based workflow developed for The Rings of Power, describing how assets were housed in AWS S3 buckets for multiple vendors to access at any given time. He also discusses how it would have been impossible for the series to have been made any other way, and how the ease and efficiencies cloud-based workflows provide are quickly becoming the new standard in film and television production.
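The interview doesn’t document the production’s actual pipeline beyond the fact that assets lived in S3, but the pattern Ames describes, in which an approved vendor pulls a shot from a shared bucket instead of receiving shipped drives, can be sketched roughly as follows; the bucket and object names here are hypothetical.

```python
import boto3

# Assumed bucket and key names, purely illustrative.
BUCKET = "show-assets-example"
KEY = "plates/ep101/sh0420/v003/sh0420_v003.exr"

s3 = boto3.client("s3")

# Upload a finished plate once; every approved vendor can then read it in place.
s3.upload_file("sh0420_v003.exr", BUCKET, KEY)

# Hand a vendor a time-limited, read-only link instead of copying the file around.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": KEY},
    ExpiresIn=3600,  # link is valid for one hour
)
print(url)
```

On a show of this scale, access would realistically be governed by IAM roles and per-vendor bucket policies rather than ad hoc links, but the underlying idea is the same one Ames credits for making anything available to anyone, anywhere, at any time.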
Watch Part 1 below:
In Part 2, Ames dives into the details of producing The Rings of Power, naming the key players who helped make the project a reality, including AWS and VFX vendors ILM, Rodeo FX and Rising Sun Pictures, among others. He also details the collaboration between Blackmagic Design and Company 3, who teamed up to develop a cloud-based color correction and finishing pipeline that allowed colorist Skip Kimball to color grade episodes remotely with real-time review and feedback.
Watch Part 2 below:
In Part 3, Ames shares some of the biggest lessons learned from producing the first season of The Rings of Power, including the idea that not having materials on-prem was actually quite freeing. He also discusses how cloud-based workflows are changing production and post as a whole, and that the cloud’s main asset is scalability, making it available to productions of all sizes. “All the shows I’m working on now currently have some aspect of cloud-based production,” he says. “It is becoming ubiquitous.”
When It’s All an Action Sequence: Editing “John Wick Chapter 4”
TL;DR
Director Chad Stahelski wanted to work with an editor who came with no preconceptions about how a John Wick action film should be put together.
Editor Nathan Orloff talks about being able to accomplish a fantastic rhythm, but over a near three-hour run time.
Stahelski discusses cinematic influences including “The Good, The Bad and the Ugly” and MGM musicals.
With John Wick 2 and 3 editor Evan Schiff unavailable, franchise director and co-creator Chad Stahelski cast around for a new cutting room collaborator for Chapter 4. He alighted on Nathan Orloff (Ghostbusters: Afterlife), in part because Orloff had limited experience editing action movies.
“In my interview with Chad, we just really hit it off,” Orloff explains on the Next Best Picture podcast. “I found out many months later that one of the reasons he wanted to bring me on is because I don’t have extensive experience in action. He didn’t want someone to come in and do their thing that they’ve been doing on other action movies… because John Wick is sort of antithetical to how a lot of action movies are cut these days.”
To understand why, you have to appreciate that Stahelski’s vision for the fourth installment in the franchise was to expand the John Wick universe by bringing in multiple storylines and a longer run-time to let the action play out on screen, rather than having the editing dictate the action.
“The other films are very much like, you know, that John is on a direct rampage or running for his life. This film was intentionally designed to be more reflective and contemplative: after his entire career as a hitman, he is forced to reckon with his past and what he’s done.”
Stahelski’s influences range from the lush visuals of Wong Kar-wai to the operatic staging of Sergio Leone westerns. As the director explained to Jim Hemphill at IndieWire: “I love the seventies movie style. I love four act operas. I love Kabuki theater. The Asian cinema kind of breaks a lot of rules that we adhere to in the three act version [of movies] and we’d like to think John Wick breaks a lot of those rules because we do go a little operatic.
“Lawrence of Arabia is a good example like that. That movie kind of flies by to me and it doesn’t feel like you need an intermission in it.”
The filmmaker’s homage goes so far as to mimic the famous “match cut” by editor Anne V. Coates in David Lean’s Lawrence of Arabia, in which Lawrence in profile blows out a match and Coates cuts to a blazing desert sunrise.
“I remember vividly when I went to set in Paris, Chad asked me ‘what’s the most famous cut in all of cinema?’ and said we’re going to do it our way,” Orloff relates to Next Best Picture. “I wanted to make sure we did the exact number of frames when the fire was blown out before cutting to the sunrise. You know, I wanted to do it justice.
“He told me he’d rather swing and miss than do the same thing over again. And so that match cut is indicative of [telling the] audience what we’re going for.”
Other acknowledged influences on the director’s action style are classic MGM musicals and those featuring Fred Astaire. In films like Singin’ in the Rain or Top Hat, the camera stays generally static and in wide shot with minimal edits so the viewer can take in all the dancing brilliance performed by the films’ stars.
“I love Bob Fosse here, one of my huge inspirations,” Stahelski tells Indiewire. “You take Gene Kelly, the old Sunday Morning Sunday Parade or something like that. You watch Fred Astaire do his thing. And if you watch the way we shoot, it’s very simple. The way we train people [to perform stunts] is very, very, very dance oriented.”
Orloff elaborates on what this means to decisions in the cutting room.
“Musicals like back then were sort of like you edited around the dancing,” he says on an episode of The Rough Cut podcast. “You showed them dancing. They would do a move, finish, cut, start something else. And the way Chad talked about that really inspired me to do that with our characters and not use the editing to try to punch anything up.”
There are times when the stunt performance, or that of Keanu Reeves, isn’t quite perfect: “they slip or there’s something not great about the timing of this or that, but not being so obsessive about perfection makes it just so much more real. When you’re cutting less, you’re able to absorb everything more. You feel more empathy for the characters because you feel like you’re just there.”
John Wick: Chapter 4 clocks in at 169 minutes, more than an hour longer than the original. Stahelski explains why he wanted a movie of this length.
“In our heads we knew that we wanted to show this constant decreasing circle that spirals closer and closer as [the stories] come together. So every act brings us closer together. That was the plan. It sounds like a very genius plan, but you don’t know until you cut the whole thing together. Our first cut was 3 hours 45 minutes.”
So how did the edit team set about cutting that down, and knowing which killing to leave in or excise?
“When you have 14 action sequences, you can’t just edit that sequence,” the director explained. “You’ll never know if a five minute car scene or a ten minute car scene is good to watch in the two and a half hour movie.
“So the only way to truly know that you’re doing the right thing is to step back and take that half day. We’d edit all morning but by four p.m. we’re like, ‘Let’s watch the movie.’ And my editorial staff probably hates me. We’ve watched it so many times because literally even if we just took like 30 seconds out of something, I’d make everybody watch the movie again, because that’s the only way you know you have the right pace.”
He adds, “It’s the whole song that makes you rock out. I think that was a big learning experience with me and my editorial team to constantly watch a two and five hour movie and feel where the slow parts were and to work on those parts.”
Because John Wick is dispatching henchmen left and right in intricately planned and executed stunts, deciding which killing to leave in or excise was tricky, admits the editor.
“There is definitely sometimes overkill when something is too similar to something else,” Orloff told Next Best Picture, “but going back to the music was a huge help in creating different tones and alternating what we were doing to avoid the things feeling the same. And to Chad’s credit, especially in the last act when we go from a street fight to a car chase to a lengthy overhead shot, even though the audience has watched non-stop action for 30-45 minutes, the movie is structured so skillfully that you’re seeing something you’ve never seen before.”
Step Into the Ring: Kramer Morgenthau’s Cinematography for “Creed III”
TL;DR
Even though the “Creed” movies are part of an expanded “Rocky” cinematic universe, this is the first of nine films that doesn’t have the original character as a part of the plot.
Director and star Michael B. Jordan collaborated with “Creed II” cinematographer Kramer Morgenthau to reinvent how boxing scenes are shot.
The filmmakers aimed for a heightened visual style influenced by Japanese anime, including what they called “Adonis vision,” a subjective POV from Adonis Creed as he’s clocking each fight.
“Creed III” was shot in IMAX format with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics.
Like the story arc of the majority of boxing movies, Creed III had a number of challenges to overcome in its production journey. Firstly, even though the Creed movies were part of an expanded Rocky cinematic universe, this was the first of nine films that didn’t have the original character as a part of the plot; Rocky had left the story.
Flipping this negative into the positive feel of a new start gave first-time director and star Michael B. Jordan a chance to reinvent how to shoot the boxing scenes in particular. An easy reference, subconsciously or not, was Scorsese’s Raging Bull, as its fight scenes are stylistically different from everything else in the film.
Also, a new POV suited the storyline of a fight between a retired Adonis Creed and a significant person reappearing from his past with major issues to resolve.
Previous Creed II cinematographer Kramer Morgenthau and Jordan laid plans for a new “in ring” aesthetic as Max Weinstein explains in American Cinematographer. “Settling into his duties as a director, Jordan determined early in prep that he and Morgenthau would need to take two ‘big artistic swings’ to fully engross audiences in Donnie’s next chapter.”
The intention was to aim for a heightened visual style. “Michael is hugely influenced by Japanese anime — that’s completely his stamp on the movie. So, he brought that into the way we cover the fights,” Morgenthau says. “There’s this thing we call ‘Adonis Vision,’ where you’re seeing subjective point-of-view from Adonis as he’s clocking each fight, and that plays out in an anime style, with these hyper-real close-ups.”
For that, they switched to very wide-angle lenses, a 12mm Panavision H Series and a 14mm VA. “That again was part of Michael’s vision from the beginning. It’s very much an anime approach.”
But the action in general had to be seen from the inside, not the outside, which is the problem for most sports action movies.
The Panavision website described how the boxing was shot within reach of the fighters. On both Creed II and III, Morgenthau was joined in his corner by A-camera and Steadicam operator, Michael Heathcote. “Mike and I came in early during prep to work with [2nd-unit director and supervising stunt coordinator] Clayton Barber and [assistant stunt coordinator] Eric Brown to help design the moves for the fight choreography. There’s an arc to what happens in the fights and the stories happening in the corners and in the ringside seats. That was all carefully choreographed, like shooting a piece of dance.”
Working with Panavision Atlanta, Morgenthau chose to shoot Creed III with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics. “We shot all the dramatic scenes with T Series and C Series anamorphic lenses, and for the fights, which are in the 1.90:1 aspect ratio for IMAX, we used [prototype] spherical VA primes that we customized to add a bit more softness and help them match the look of our anamorphic lenses,” the cinematographer explains.
Morgenthau also shot certain sections of Donnie’s bouts with the Phantom Flex4K, whose high-speed capabilities enabled him to create an “ultra-slow-motion analysis of some of the major moments in the fights, where we wanted to be inside the boxers’ heads.”
Other cameras used included prep cameras to rehearse moves: “We prepped by shooting each fight with small digital cameras, and shooting sketches of what it should be, figuring out the most impactful places to place a camera and trying to show what it’s like to be in the ring from a boxer’s perspective.”
The other big “artistic swing” was the unveiling of a new, taller aspect ratio to give the fighters an almost god-like stature. “In the film’s dramatic scenes, intimate glimpses of Donnie’s and Dame’s out-of-the-ring lives are framed for the 2.39:1 aspect ratio, but whenever a match is underway, the frame is expanded to 1.90:1 Imax. The filmmakers opted to shoot most footage for both aspect ratios with Sony Venice cameras certified by the ‘Filmed for Imax’ program,” Weinstein notes in American Cinematographer.
With up to 26% more picture (at the same frame width, a 1.90:1 image is roughly 26% taller than a 2.39:1 image, since 2.39 ÷ 1.90 ≈ 1.26), this third installment in the Creed franchise became the first sports-based film included in the “Filmed for IMAX” program.
“It was really exciting to be able to integrate the IMAX cameras into the filmmaking process, especially the way we used them to open the world up and to make it very immersive and visceral for the fight sequences,” says Morgenthau, according to a report by ReelAdvice. “And that’s how we chose to use it; there was just something very magical, especially the scene at Dodger Stadium, where MBJ is walking out onto the field and the image aspect ratio expands in shot and the black bars recede, and you get this really tall, beautiful, powerful image. It just elevates everything, there is just something hyperreal about it. And to be the first sports movie doing that, it was a creative high.”
Director and star Jordan, speaking to American Cinematographer, says “We were looking at these old photos of Muhammad Ali by Neil Leifer, and [we called] the shots that he would get of these outdoor fights ‘clouds to the canvas,’ where you can see everything in the frame. So, we just wanted to recapture that — get all that information up on the screen. Then, we’d ask, ‘Okay, when is it going to open up? When is it going to transition into that ratio?’ It was about picking those moments and balancing them.”
Morgenthau concluded with almost reverence for the sport and the fighters in an interview with Gary M. Kramer for Salon. “The way we photographed the bodies was like photographing sculpture. Their bodies are sculpted and beautiful, and covered in sweat and oil and very reflective. Shooting them was about how their bodies and faces were reflecting light and honoring their performances was showing them in their ‘best light,’ so to speak,” he said.
“I studied paintings by George Bellows, and the Ashcan school of painting was an inspiration. There was an Eakins painting in a museum in Philadelphia that I was looking at, and I referenced great boxing photography, like some of the Ali color photographs. These images inspired how we lit the boxers.”
How “The Boy, The Mole, The Fox and The Horse” Won Hearts and Minds
TL;DR
Based on the bestselling illustrated book by Charlie Mackesy, the Oscar-winning animated short film “The Boy, The Mole, The Fox and The Horse” has been described as “‘The Little Prince’ for a new generation.”
The international animation team that brought the film to life spanned 20 different countries, with artists working remotely due to the pandemic.
The filmmakers wanted to retain the signature style of Mackesy’s ink and watercolor illustrations, with Mackesy closely involved in the process to ensure that the film stayed true to his vision.
The Oscar-winning short The Boy, The Mole, The Fox and The Horse is like receiving an “emotion bomb” when you first see it. If you have any pent-up sentiment left over from the pandemic, Charlie Mackesy’s animated story of a young boy and his animal friends might extract it from you, so be warned.
The award-winning animated story, now streaming on Apple TV+ and the BBC iPlayer, is the realization of Mackesy’s beautifully rendered ink and watercolor drawings, which were immortalized into an illustrated book that ended up topping the bestseller lists in both the United States and in the UK.
Filmmakers then approached Mackesy to take the story to the next level, but how do you turn heavily characterized pencil drawings into moving images and keep the signature style of the artist?
Initially, Mackesy’s intentions were less about the bottom line than about more spiritual and Christian ambitions. He explained to Ryan Fleming at Deadline that helping people was his driving force and he thought the film would add to that.
That the book even became a hit shocked him, Mackesy said. “When the book came out, I got so many emails, like thousands of emails, telling me how the book had moved them or helped them, particularly in the pandemic,” he said. “I felt like if the book had done that, could a film reach people in the same way?”
He soon had his answer. After reading Mackesy’s book in 2019, producer Cara Speller said she “completely fell in love with it and got in touch with Charlie and his partner, Matthew Freud, and talked to them about what we could potentially do in turning it into a short film.” After a discussion with the creators, Speller contacted Peter Baynton, who was ready to join as director.
Speller told Jérémie Noyer at Animated Views how important it was to have Mackesy front and center in the process. “It was always really important to me right from the start that Charlie be at the center of any team that we put together to make the film. You can tell immediately from the book that he has incredibly strong instincts about what works. To me, it didn’t make any sense to try and make that without having him so closely involved.”
The animation team worked remotely because of COVID, with a shared goal of creating a look that reflected as closely as possible the drawings in the book, which were ink and watercolor. “We wanted to make those drawings basically move but keep the spirit of the fluidity of the ink and the line and the varying thickness of line,” Mackesy says.
Director Peter Baynton underlines the connection between Mackesy’s style and his animation team: “Charlie’s drawing is underpinned by a great knowledge of anatomy. So, even though he draws extremely quickly and quite impressionistically, you can tell he knows horse or boy or fox anatomy so well. For the mole, it’s a little bit different.
“It was important not to lose that lovely loose quality and make things stiff. So, we came down to a system where we would animate quite tightly on detailed models, and then, on the ink stage, we encouraged the artists to find that looser way of inking. It was about finding that very fine line that sort of drifts around the characters.”
“It was a very international crew,” noted Speller, “coming from 20 different countries. We started the work on the film in the middle of the pandemic, so everyone was working remotely from their homes. We built the team in the same way you build any team on a production. You’re always looking for the most talented artists you can find; it doesn’t matter where they are in the world, as long as you think they’re the right fit for the project and for the team.”
“The style warrants movement,” said Gladstone, “but how did you achieve it? The line halo that goes around the drawings, how is that translated to movement?” Director Peter Baynton explained the significance of the halos: “Charlie describes those lines as thinking lines and they’re very characteristic of his drawings,” he noted.
“The process is that we start with pencil rough character animation, to define the performance and then it goes through a clean-up stage where we adjust the model where everything looks like a model and then we go to an inking stage where we do these key ink drawings and at that point we would add these lines, these thinking lines or ‘thinkies,’” he continued.
“It was a careful balance as sometimes that would feel too stiff and attached like a wire so we found a way of making them dissipate and reappear.”
Art director Mike McCain summed up Mackesy’s style and how it was transferred to motion. “Charlie has such a beautiful economy with ink and the book has such a minimal approach to storytelling and it’s just what’s needed on the page,” he said. “As we were looking to bring that wilderness to life the biggest challenges were finding how to add and not to over add. Just put what’s needed on screen to make it feel like you’re surrounded by this world.”
Variety’s Peter Debruge calls the short “The Little Prince for a new generation.” He goes on to add, “Beautifully adapted from British illustrator Charlie Mackesy’s international bestseller. Those who know the book — a Jonathan Livingston Seagull-esque life preserver for many during the pandemic — will appreciate how the team managed to translate Mackesy’s unique ink-and-watercolor style, with its distinctive blend of thick brushstrokes and loose, unfinished lines.
“Isobel Waller-Bridge’s gentle score coaxes audiences into a receptive place, while the quartet of Jude Coward Nicoll (the Boy), Tom Hollander (the Mole), Idris Elba (the Fox) and Gabriel Byrne (the Horse) lend sincere voice to various affirmational ideas,” Debruge continues.
“Cynics may dismiss what one acquaintance called its ‘bumper sticker wisdom,’ but they miss how vital it is to plant ideas of this nature in the heads of young viewers: boosting their confidence and unpacking what it means to feel lost — or seen — before social media brainwashes them otherwise.”
The 2023 NAB Show will host a conversation with the creative team behind short film “The Boy, the Mole, the Fox, and the Horse.”
March 31, 2023
Dive Into Data at NAB Show
TL;DR
The data you have is always the best data. Learning to understand, enrich, and leverage it is how companies will succeed in 2023 and beyond.
Empowered by regulation and education, consumers are increasingly cognizant of their privacy rights, even as individuals and companies rely more heavily on data. Ethical and transparent practices will continue to grow in importance.
The entertainment business remains relatively old-fashioned but is seeking to understand audience behavior in ways that enable better monetization and loyalty.
Learn more by signing up for the Data Data Data exhibit floor tour at NAB Show in Las Vegas, April 16-18.
Bryndan Moore, host of The Black Futurist podcast, recently interviewed Arisha Smith about how data and analytics are reshaping the media landscape and how entertainment companies are playing catch-up.
Smith is chief growth officer for ethical data company Streamlytics, as well as founder and managing partner at Idyllic Interactive.
Moore and Smith also preview the Data Data Data tour at this April’s NAB Show, which Moore will guide. Smith will also speak on the same subject for a panel in the Create track.
Stick to the Basics
With all the recent fuss over data’s importance, you might be surprised to learn that Smith thinks it hasn’t really changed all that much in the past decade. Methodologies may be different, but the core principles remain.
“You would be surprised,” Smith says. “At the end of the day, all platforms are pretty much just leveraging the information they have. And it really comes down to — which is crazy — email addresses, most of the time, because that is the first touchpoint or the only pixel or digital touchpoint they have.”
Yes, email addresses are still used to “tie together all of the insights and all the behavioral trends to know a little bit more about the people that they want to serve.”
Smith got her start working at a syndicated radio station. She recalls how the station leveraged its e-newsletter to gather information about audience behavior, such as whether subscribers listened to the audio or read content on its website. Smith says omnichannel advertising has evolved out of these very same techniques.
“As long as you have one ‘unifier’ [data point]… there’s many pieces on which you can operate.” Flash forward to 2023: there’s a general understanding that companies need this information, but Smith notes that there’s still a knowledge gap when it comes to analytics and how to appropriately leverage scale to ensure growth and monetization. That’s where specialists can come in.
But there’s a bit of a catch. Media companies have long relied on third-party insights, reviewing Nielsen or Comscore metrics in their analysis.
Smith says you should actually be looking for “zero-party data,” essentially the “best, highest, richest, most qualified data” accessible when analyzing potential customers or current audience. “Start with the information you have. [It] is the richest, is the most informative.”
Use the email address to get additional demographic information, perhaps via a survey, and build out your data from there. First, she suggests, “learn more about… your best sellers or your best customers. Once you know more about them, once you know where you are, you’ll understand where you need to go.”
At this point, you may want to supplement with second- or third-party data, leveraging a brand that already has information on its customers to find connective tissue or “the synergies between our customers and your customers.”
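As a rough, hypothetical sketch of what “starting with the data you have” might look like in practice (the records, field names and partner list below are invented for illustration and are not from Smith or Streamlytics), the idea is simply to unify first-party records on a shared email address, enrich them with survey responses, and flag overlap with a partner’s audience:

```python
# Hypothetical illustration: unify first-party records on an email address,
# enrich with survey responses, and check overlap with a partner's audience.
# All names, fields and values are invented for the example.

first_party = [
    {"email": "ana@example.com", "newsletter_opens": 12, "purchases": 3},
    {"email": "ben@example.com", "newsletter_opens": 2, "purchases": 0},
]

survey_responses = {
    "ana@example.com": {"age_range": "25-34", "favorite_genre": "documentary"},
}

partner_audience = {"ana@example.com", "cara@example.com"}

def enrich(records, survey, partner):
    """Attach survey data and partner overlap to each first-party record."""
    enriched = []
    for record in records:
        email = record["email"]
        combined = dict(record)
        combined.update(survey.get(email, {}))              # first/zero-party enrichment
        combined["in_partner_audience"] = email in partner  # second-party overlap
        enriched.append(combined)
    return enriched

for row in enrich(first_party, survey_responses, partner_audience):
    print(row)
```

However the tooling is dressed up, this kind of join is what lets the email address act as the “unifier” Smith describes.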
Data and Privacy
Smith acknowledges that those on the consumer side may not always be comfortable with how much companies can learn about them via data aggregation and enrichment.
Not surprisingly, Smith sees ML and personalization as a net benefit to her and her family, creating efficiencies that are worth the trade-off.
“But many people are conscientious about how their data is being utilized and their concerns about ownership of the data. And they’re also concerned about the monetization of their data,” Smith acknowledges.
To that end, she encourages individuals to invest time in reading privacy policies. She also says regulators are strict about the enforcement of data protection and ensuring that this information is shared in a way that everyone can understand.
Regulations such as GDPR and the CPA should also put consumers at ease, Smith says. In the US, individual states govern data policy, but the rules are generally the same.
If you’re still uncomfortable, Smith suggests you request to review your data from the platform or app. Companies are also required to delete your data if you request it. But the first step — finding out what they’re tracking — can be personally revealing, Smith says.
Once you download your data file, “you might find out something about yourself” after parsing the Excel spreadsheet.
Smith’s company, Streamlytics, also offers a “consumer-facing platform where people can upload their data file and view it in an easier to read way.” Hers is one of several tools to help consumers analyze their own data and make informed decisions about privacy.
“Use your best judgment,” Smith advises.
Cookie Depreciation and Other Challenges
Regulatory changes and privacy-driven consumers have come alongside the “depreciation” of internet cookies to challenge the effectiveness of modern campaigns, Smith acknowledges.
iOS 14’s data opt-out prompted companies to turn to omnichannel advertising campaigns, relying less on campaigns whose effectiveness could be measured by a click-through rate to determine ROI.
Awareness is still a key goal for advertisers, but of course, revenue growth remains the ultimate goal, with data being able to connect the ad campaign through to an eventual purchase (at least based on some educated guesses).
“It’s a long game,” Smith says. “It’ll be interesting to see as the cookie depreciates what [measurement] looks like moving forward.”
So, Who’s Doing This Well?
Consumer packaged goods companies are at the forefront of consumer analytics, functioning as data companies as much as or more than retailers do. The entertainment industry, on the other hand, “is looking to understand it.” The M&E cruise ship is still turning, Smith believes.
The pandemic and the closure of movie theatres prompted many companies to realize that they couldn’t rely on box office numbers, but instead had to interpret information from the streaming platforms.
New production houses are leading the data-centric changes, and, predictably, internet-native companies like Netflix and Amazon Prime are also ahead of the game in this space.
You can see it in content recommendations, as well as content acquisition strategies. Smith points out that neither necessarily has to take place in-house, but both will require a level of investment in technology and capital to be effective.
Explore Data at NAB Show
If you want an efficient way to learn about “intelligent” data’s impact on M&E, purchase the Data Data Data tour pass and reserve your preferred slot when you register for NAB Show.
This tour focuses on analytics, data and metadata management, from content creation to post-production management through distribution. There will also be an emphasis on AI and machine learning solutions.
Moore’s two-hour tours are scheduled Sunday through Tuesday, offering both morning and early afternoon starts.
Additionally, you can hear Arisha Smith speak as a panelist for the Data Data Data panel, part of the Create track, scheduled for Monday, April 17 at 3 p.m. (PT).
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
Lori H. Schwartz recently interviewed Christina Heller about immersive content, Web3, and the tech and trends that are making it more accessible for creators and consumers.
March 29, 2023
The Future of Production Amplified: Defining a New Production Ecosystem with MovieLabs
TL;DR
Mark Turner, program director of production technology at MovieLabs, discusses the principles behind the MovieLabs 2030 Vision Initiative and how it aims to transform motion picture production.
Turner believes that moving all aspects of production, including scripts and metadata, to the cloud is necessary to remove inefficiencies and increase scalability. He also identifies the movement of media assets as one of the biggest problems with current production workflows.
The Common Security Architecture for Production (CSAP) is a “Zero Trust” approach to managing assets and workflows that forms a foundational part of the 2030 Vision.
Software-defined workflows enable flexible, dynamic workflows that can be modified mid-stream to make productions more efficient.
Videos in “The Future of Production Amplified” Series:
As part of our series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with Mark Turner, Program Director of Production Technology at MovieLabs, about the MovieLabs 2030 Vision Initiative and what it means for the future of motion picture production.
Turner believes that the current production ecosystem is plagued by inefficiencies, with one of the biggest being the constant movement of media. “I don’t think that’s a revelation,” he says. “We move media way too much. We’ll often move media to applications or to people who have to do a task, and that takes time. It introduces the chance of loss of media, loss of metadata. There’s certainly a risk of security breaches.”
Understanding that cloud-based media is more than storing massive video and audio files is another key point Turner makes. “But actually it’s all parts of the production,” he explains. “It isn’t just the big video files; it’s the script, it’s the metadata that’s attached to every shot. It’s editing, it’s post-production workflows, it’s subbing and dubbing. All of those things need to happen in the cloud if we are going to remove this inefficiency that we have today,” he says.
“And if we can’t make this process more efficient, we will never be able to scale the amount of productions that we can cope with. There’s simply no way to keep doing it with the old hundred-year method that we’ve been using since we started with cinema and shipping film around the place.”
In Part 1 of this exclusive Q&A, Turner provides an overview of the MovieLabs 2030 Vision initiative and explains why he prefers the term “multi-cloud production” to describe our current ecosystem, which often employs multiple cloud platforms — public and private — for a single project. He also discusses why the movement of media assets is one of the biggest problems he’s seeing with current production workflows, and how Marvel Studios developed the script for Black Panther: Wakanda Forever in the cloud using the ProductionPro SaaS platform to centralize and coordinate assets across multiple production departments.
Watch Part 1 below:
In Part 2, Turner tackles security, explaining why it’s such a massive challenge for studios and the larger production ecosystem, and how it forms a foundational part of the 2030 Vision. He also talks about the need for security to “step out of the way” so assets can be shared among vendors, and outlines the Common Security Architecture for Production, or CSAP, which uses a “Zero Trust” approach to managing assets and workflows.
Watch Part 2 below:
In Part 3, Turner dives into software-defined workflows, which enable flexible, dynamic workflows that embrace change and can be modified mid-stream to make productions dramatically more efficient. “Right now, any change in a workflow is highly disruptive,” he says. “We want workflows to be able to adapt in real time, and that does open up some new challenges, but a huge amount of opportunity.” He also discusses how Skywalker Sound’s Coda platform leverages the cloud to automate soundtrack mastering, breaking down multiple processes into “microservices” to output deliverables in minutes rather than days.
Watch Part 3 below:
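To make the idea of a software-defined workflow described above a little more concrete, here is a minimal, hypothetical sketch (our own illustration; it is not MovieLabs’ specification and not how Skywalker Sound’s Coda actually works) in which the pipeline is expressed as data, so steps can be added or reordered while a job is still in flight:

```python
# Hypothetical sketch of a software-defined workflow: the pipeline is data,
# so it can be inspected and modified mid-stream rather than being hard-wired.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Workflow:
    steps: List[str]
    tasks: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def run(self, asset: dict) -> dict:
        # Steps are looked up at execution time, so the list can change
        # while the job is running (e.g., inserting an extra QC pass).
        while self.steps:
            step = self.steps.pop(0)
            asset = self.tasks[step](asset)
        return asset

# Invented example tasks standing in for independent microservices.
tasks = {
    "conform": lambda a: {**a, "conformed": True},
    "mix":     lambda a: {**a, "mixed": True},
    "master":  lambda a: {**a, "deliverable": f"{a['title']}_master.wav"},
}

wf = Workflow(steps=["conform", "mix", "master"], tasks=tasks)

# Mid-stream change: add a loudness check before mastering without rebuilding anything.
wf.tasks["loudness_qc"] = lambda a: {**a, "loudness_ok": True}
wf.steps.insert(wf.steps.index("master"), "loudness_qc")

print(wf.run({"title": "reel_07"}))
```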
Connect with Mark Turner on LinkedIn, and learn more about the MovieLabs 2030 Vision initiative at movielabs.com. You can also follow MovieLabs on LinkedIn, Twitter and YouTube.
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
The ASWF’s David Morin explains why the development of a strong software engineering community is critical to the future of film production.
March 29, 2023
“Countdown to NAB Show” Live Stream
KitPlus TV and NAB Amplify hosted a live stream that features more than 100 NAB Show exhibitors and participating partners. Watch now for exclusive updates, industry news, and previews of new technologies.
Add the Amplify+ VIP package to your NAB Show registration for only $99 and gain access to an exclusive community online and at the Show. Plus, you’ll enjoy year-round networking, education, content and Show perks so you’re always in the know at and between Shows.
For $99, you’ll gain access to expedited VIP entry at the Main Stage theater, access to the NAB Amplify+ VIP Lounge and Wi-Fi Hot Spot located in Central Hall lobby, and an exclusive invite to the NAB Amplify+ VIP Party on Monday, April 17 at Aria Jewel with expedited VIP entry starting at 10:30 p.m.
Beginning May 2, you’ll also find a full article in your email inbox each Tuesday offering exclusive NAB Show conversations, sessions and recaps. You’ll also be granted VIP perks when you register for NAB Show New York, October 25 – 26, 2023.
March 28, 2023
Discover New Production Modalities at NAB Show
TL;DR
Virtual production should be seen as a bridge leading both creators and consumers on to a future of immersive content and multimedia experiences.
Content creators have been able to take tools and workflows developed to create virtual and augmented reality and apply them to traditional media formats.
Younger content creators see content, tools and workflows in less siloed ways than their forebears did. Gaming, film, social media, and TV are all influencing each other.
Learn more by signing up for the New Production Modalities exhibit floor tour at NAB Show in Las Vegas, April 16-18.
Lori H. Schwartz, Storytech principal and NAB Amplify content partner, recently interviewed Christina Heller about immersive content, Web3, and the tech and trends that are making it more accessible for creators and consumers.
Schwartz and Heller also preview the New Production Modalities tour at this April’s NAB Show, which Heller will be leading. Additionally, Heller is slated to speak on the same subject for a panel in the Create track.
Heller is founder and CEO of volumetric capture company Metastage. Its LA studio is kitted out with 106 cameras. Her company has offered volumetric capture for projects varying from fashion to music to sports training to enterprise.
Prior to that, Heller founded VR Playhouse, a virtual reality content company. An expert on immersive video, she has published on volumetric video and augmented reality, and has a background in documentary filmmaking, multimedia content, and journalism.
The Virtualization of Content
“Immersive technology is the most multimedia collaborative form of creation, I think, that’s ever existed,” Heller explains, because it pulls from so many diverse backgrounds and skillsets “to create an experience.”
Ultimately, “Content is where it all comes together, right? Hardware, software, financing, distribution, audiences, UX, UI, all of it. So if you work in content, you have visibility into the entire ecosystem,” Heller says.
In her role at Metastage, Heller says she’ll be asked to take on projects for “an idea that I’ve never even considered before. And then as I’m learning about how we can execute that idea, I’m now becoming aware of, you know, the latest and greatest of tools and technology to do things like that.”
Content creators have been able to take tools and workflows developed to create virtual and augmented reality and apply them to traditional media formats. Heller says, “Virtual production pipelines and workflows… that’s been a really exciting development of the last two years” as they come into their full potential.
Significantly, these tools have “never been more accessible. They’ve never been more powerful. Things that really felt obscure, like Unity and Unreal [Engine], like maybe six years ago, now there’s free classes, there’s plenty of online resources” for those who want to maximize their potential, Heller says.
And that contributes to the way that the “current generation of filmmakers … don’t see as much of a delineation between games and TV and media. They realize that there are many, many ways to tell stories, [and] that a lot of the tools to do it overlap.”
In particular, “virtual production seems to be this nice bridge” to “our true immersive future,” she says, noting, “We’re already seeing, for instance, the impact of video games on cinema, and again, immersive technologies right there… in the middle of it all.”
For those interested in working in this space, Heller suggests first, “knowing yourself and your interest.” She considers herself to be a generalist, with a breadth of knowledge and the ability to connect the dots on projects.
Do “you want to be someone that knows a little bit of everything, or do you want to be someone that really develops an expertise in this one area?”
“[T]here are people who are experts at lighting for game engines, and there are people who are experts at spatial audio.” And they don’t just work in video game development anymore.
Using Immersive Tech in New Ways
For example, the content from “volumetric capture can also be repurposed a number of different ways,” notes Schwartz.
“Everybody is coming to talk to us about volumetric capture, LED walls, real-time game engine technology for their existing IP. How can we leverage it to make our workflows cheaper or faster, better or more effective?” Heller explains.
“And then because those are the same tools that you then use to create virtual and augmented reality experiences, it’s a no brainer to add those components to whatever campaign you’re building.”
A 360 capture, sometimes called a hologram, can be used in “an augmented reality application where we could place her anywhere, you know, on the streets of New York or on my kitchen table,” Heller explains. Alternatively, it can be utilized in “a virtual environment. And maybe if I’m in a VR headset, I’m standing right in front of you and looking at you as if you are standing right in front of me.” Or they might “create a cinematic moment that could be then used in an advertising campaign for TikTok or for Instagram.”
“I’ve been hugely inspired by some of the work that’s come through. I mean, we just had somebody do the first ever volumetric capture TikTok filter, and it was for Bob the Drag Queen, from RuPaul’s Drag Race.”
What’s Next?
Some observers have expressed concern over immersive tech’s near-term future as some Silicon Valley companies have discontinued or defunded programs. But Heller remains optimistic.
“Despite the shakeups of the last six months in the tech sector, we still see, you know, a tremendous amount of interest in this space.”
In fact, she says, “I wouldn’t be surprised if we see a lot of new companies popping up, a lot of startups.” Heller predicts, “Some of the forces that have made a commitment to immersive tech” may ultimately be buttressed by “all the new talent that’s out there” and newly available in the marketplace.
She’s bullish because “The desire to see immersive technology come to fruition, truly manifest in a more broad way is powerful. And there are enough players in the game, both big players and also independent creators, that are eager to be a part of that, that when a program goes away, it doesn’t stop the current of the full vision that I think is coming to life.”
And Heller is not the only one. She cites the Producers Guild of America as another group with an eye on this work. The nominees for its 34th annual PGA Innovation Awards included a strong showing from her sector; Heller estimates “two thirds of the projects are some form of immersive project.”
If you want an efficient way to learn about the latest production modalities, purchase the New Production Modalities tour pass and reserve your preferred slot when you register for NAB Show. Heller’s two-hour tours are scheduled Sunday through Tuesday, starting at 10 and 10:30 a.m. (PT).
NAB Show tours for groups and individuals will run from April 16 through April 18 at the Las Vegas Convention Center.
March 28, 2023
Explore the Evolution of Video at NAB Show
TL;DR
Video is continuing the “more” trend: more content, more ways to watch it, more ways to edit it, more ways to distribute and monetize it.
Linear TV is learning from streaming and FAST channels, and the new kids on the video block will seek to deploy the best practices developed by more established competitors.
Learn more by signing up for the Evolution of Video exhibit floor tour at NAB Show in Las Vegas, April 16-18.
Lori H. Schwartz, technology catalyst and NAB Amplify content partner, recently interviewed Tim Hanlon about the future of the video ecosystem, as well as one way to experience its ongoing transformation at the 2023 NAB Show: the Evolution of Video tour, which he will lead.
Hanlon is founder and CEO of strategic consulting and advisory firm Vertere Group. He offers “strategic and operational expertise” with an emphasis on the intersection where “TV-meets-digital-video.” He also hosts and produces the sports history/pop culture podcast “Good Seats Still Available.”
Watch the video, above, or read on for a summary of their conversation.
The State of Video in 2023
First of all, there’s legacy, linear video, which is “exponentially expanding and compressing and becoming of higher quality,” Hanlon says. These trends are enabled by collaborative technologies that allow creating and editing in the cloud, on the fly. In some cases, live remote work can even happen without a truck.
“Linear is itself evolving and changing and getting more sophisticated,” Hanlon says.
On the other hand, consider “the explosion of streaming,” whose hallmarks of on-demand and user-generated content have traditionally differentiated it from linear video offerings. That’s where FAST channels have come into the mix, disrupting our conception of “television channels.”
Hanlon predicts this year will be one in which more traditional TV producers, distributors, and channel owners grapple with how to take advantage of free ad-supported streaming video content.
The inverse will also likely be true. How can streamers benefit from the FAST way of doing things, or the linear television model?
“How does a streamer that’s never been in the ‘television’ realm challenge, disrupt itself, compete? How did the two harmonize or not? All of those things are really literally up in the air.
“Individually, television and streaming video, they’re exponentially changing and evolving very, very quickly. But then they’re also colliding towards each other,” Hanlon says, noting that these shifts are making “a lot of people uncomfortable” as we are in an interim place, waiting to see who the winners and losers will be.
At NAB Show, Hanlon suggests, we’ll get a preview of how more of this may work in practice. As he puts it, “Why has the potential to become real — and what the heck do I need to do about it?”
Short-form and Social Video
The significance of snackable content is one trend that’s withstood the test of a few shows. From Hanlon’s perspective, the main takeaway is not that this content is briefer, but the “democratization of creation and distribution” that goes hand-in-hand with TikToks and Reels.
In 2023, video professionals may debate the best format to shoot and/or share their subject matter, considering options that were almost unthinkable 15 years ago.
Varying wildly in quality and form, “the army of everybody” and their smartphones and handheld cameras have joined an ecosystem that in 2018 was “the exclusive domain of professionals and trucks and cables and satellite feeds.” Today, smartphones and apps can enable individuals to “literally create, produce, edit, disseminate, react to and live and breathe further into the ecosystem.”
And some of the smartphone video can be reconstituted for linear TV, borrowing from Vin Di Bona’s “America’s Funniest Home Videos” format, substituting downloaded files for mailed VHS tapes. But it’s also available to watch in real time, perhaps without editing, before a pro can consider repackaging it.
These developments are further complicating the video and TV ecosystem.
“It’s hugely fascinating, but it clearly upsets a lot of traditional business models along the way and creates new ones along the way,” Hanlon says.
He cautions that none of this is “zero sum. It’s kind of mutually evolutionary, right? In that short form, long form, premium, user-generated, on-demand, linear streaming, everything in between… Put that all in one big gigantic brown bag, shake it up vigorously and discuss and evolve and create and take advantage of” it all.
Finding — and Applying — the Right Tech
At this year’s NAB Show, Hanlon expects to simplify some of these trends, or break the individual technologies into their elemental parts in discussions.
“It’s creating content at Point A and distributing it, ultimately, to somebody in some economically advantageous way in various Points B,” he says.
Breaking it down further, you can look at data and measurement, signal monitoring and much more.
“Why has the potential to become real — and what the heck do I need to do about it?”
Tim Hanlon, Evolution of Video tour guide
What else will be top of mind? Hanlon thinks manner of transmission will be a big topic, with 5G and ATSC 3.0 staying relevant and buzzy in 2023. He’s a big NextGen TV believer, and this year’s show floor features a whole section dedicated to showcasing the technology in the new Broadcast District, located in the West Hall of the LVCC.
If you want an efficient way to learn about the latest video-centric trends, purchase a pass and reserve your preferred slot for the Evolution of Video tour when you register for NAB Show. Hanlon’s two-hour tours are scheduled Sunday through Tuesday of NAB Show, starting at 1 p.m. (PT).
NAB Show tours for groups and individuals will run from April 16 through April 18 at the Las Vegas Convention Center.
March 23, 2023
Posted March 20, 2023
Destination Cloud: Here’s How Live Broadcast Can Get There
TL;DR
There’s no turning back the clock on remote distributed workflows. The pressures to cut costs point in only one direction: the cloud. Can live production workflows be developed so that broadcasters access all the benefits of cloud with no drawbacks in quality?
SMPTE and Portland-based company Port 9 Labs think so, and aim to sensitize the industry to the idea that something like this is possible.
Applying broadcast standard frame-accurate timing in the cloud with minimal latency remains a challenge but not an insoluble one.
SMPTE’s master plan for moving broadcast production into the age of IP didn’t anticipate COVID (how could it?) and the cracks are beginning to show.
The pandemic has so accelerated the use of remote distributed video over the internet and cloud-based technology that grounding live broadcast studio workflows in specifications mirroring the gold standards of SDI may outlive its usefulness even before the industry fully transitions.
It was long thought that producing live programming in the cloud would be compromised at best, if not downright impossible, yet this is exactly what some corners of the industry, including SMPTE itself, are now contemplating.
One of the innovators pioneering this change is Mike Coleman, a veteran broadcast engineer and co-founder of Portland, Oregon-based Port 9 Labs.
In an excellent blog post written by Michael Goldman for SMPTE and in a public demonstration of Port 9’s proposed cloud-based live switching technology, Coleman explains how SMPTE is working to develop new architectures for live remote video broadcasting — in the cloud.
He argues that the industry has by necessity begun moving parts of its production equation to the cloud, but that this is to a large extent piecemeal.
“If you examine how a cloud-native service would be built, it would be radically different than the architectures you are seeing for these lift-and-shift kinds of things. In other words, for now, there is a big disconnect,” he says.
Coleman admits it is still early in the company’s development of its own cloud-native architecture for doing production in the cloud, and that the industry will be slow out of necessity to evolve in that direction generally, but he nevertheless believes such a transition is inevitable.
“Right now, we have lots of lift-and-shift going on,” he explains. “That means people are moving existing ground-based solutions into the cloud. Since the [pandemic], people have been under a lot of pressure to take what they already have on the ground and incrementally change it to somehow make it work in the cloud. But they are starting to realize their limitations, and the industry is starting to understand it needs to adapt.”
Coleman believes it is now possible to build IP-based media systems that can be used via public cloud services and says his company has had success moving uncompressed video on multiple public cloud systems using multi-flow UDP (User Datagram Protocol).
“Cloud IP media services would be managed as SaaS [software as a service]. Broadcasters would control the programming from anywhere they choose, but the underlying service will be maintained by the service provider,” is how he and his colleagues describe it in a separate article written for the SMPTE Motion Imaging Journal.
“It’s definitely an over-the-horizon thing and will likely take many years to get there,” Coleman says. “But, in our opinion, cloud architecture, if done correctly, would be totally different from how things are done on the ground, since the whole point obviously would be to leverage the strengths of the cloud.”
A number of critical issues need to be addressed. They include broadcasters’ chief concerns about quality of image and of synchronization, both of which are fundamental to the SMPTE 2110 family of standards.
Coleman says it should be possible to maintain quality by working with compressed media in the cloud and effectively only using uncompressed media at the point of transmission (or perhaps even rendered at display if edge compute is built out).
He picks out NDI — once anathema to broadcast engineer purists — as a robust and proven solution for sharing lightly compressed AV and metadata across IP networks.
“Generally, it is pretty good for its purpose and pretty easy to move up into the cloud, but even so, the video quality isn’t quite up to modern broadcast standards since it still requires 8-bit compressed video,” Coleman says. “Studios typically would prefer to compose video in the highest possible quality and then use compression later only for the transport phase.”
He thinks this is still a hybrid of the “lift and shift” approach and therefore not ideal. A better solution, to Coleman, is Grass Valley’s AMPP, “which is more cloud-native but still kind of in the middle between lift-and-shift and where we think it has to go.”
Coleman says one key to creating a true cloud-native architecture for broadcasters to use when producing live content involves approaching the concept of an IP-based workflow differently by taking an “asynchronous rather than a synchronous approach.”
“Today, in an IP-based studio, like with most IP-based things, you need extremely tight timing,” he explains. “Everything has to be synchronous using the PTP (Precision Time Protocol) to [synchronize all devices on a computer network]. In the cloud that is really hard to do and we have begun to realize you don’t need to do it, because you typically have a huge amount of bandwidth and tons of CPU available in the cloud [when using a major cloud provider]. So, instead, we want to work in an asynchronous model, only synchronized on the edge if you need it to be.”
He says Port 9 is working on an architecture that works without being synchronous because everything is time-stamped: “We call this having a looser time model so that we can work on uncompressed video in the cloud.”
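As a loose illustration of that “looser time model” (a hypothetical sketch of the general idea, not Port 9’s actual architecture), imagine frames that carry timestamps, arrive out of order from unsynchronized cloud workers, and are only re-ordered at the edge just before playout:

```python
# Hypothetical sketch of an asynchronous, timestamped model: frames are not
# kept in lockstep with tight PTP-style sync in the cloud; they are simply
# re-ordered (synchronized) at the edge using a small buffer before playout.
import heapq

def reorder_at_edge(frames, buffer_frames=3):
    """Yield frames in timestamp order using a small re-ordering buffer."""
    heap = []
    for frame in frames:                      # frames may arrive out of order
        heapq.heappush(heap, (frame["ts"], frame))
        if len(heap) > buffer_frames:
            yield heapq.heappop(heap)[1]
    while heap:                               # drain whatever is left
        yield heapq.heappop(heap)[1]

# Frames arriving out of order from parallel, unsynchronized cloud workers.
arrivals = [{"ts": t, "data": f"frame-{t}"} for t in (0, 2, 1, 4, 3, 5)]
for frame in reorder_at_edge(arrivals):
    print(frame["ts"], frame["data"])
```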
Another problem is egress: transferring material, and in particular data-heavy media, out of the cloud. That’s not a problem, per se — but the cost is.
“Cloud providers will charge you a lot of money in terms of data transfer fees,” Coleman says. “Therefore, typically, you do not want to send your uncompressed video back down to your facility on the ground. Our solution for that is to send only proxies down to the ground — that’s where we would use compression. Broadcasters are already very familiar with using proxies in their workflows.”
Among other things, a SMPTE working group is examining where a common set of requirements for ground-to-cloud and cloud-to-ground data transfers would be necessary, and how best to establish a common technical approach for such data transfers.
Coleman also adds that the working group “is embracing the idea of the timing model being looser in the cloud, so in my opinion, they are moving in the right direction by focusing on the data plane, or data transfer area.”
All of this has quite a way to travel before it would ever become ubiquitous in the wider industry. For now, he says the primary initial goal is to simply “sensitize people so that they can become aware that something like this is possible.”
“Broadcasters are still in this process of continuing to try incremental changes to their workflows in order to keep working as they move into the cloud,” he says. “What I’m saying is that an incremental approach won’t ever get you where you want to go. At some point, you have to make a big break. Before they can make that big break, they have to understand how it could work using [a cloud-native process]. I expect there may be a transition period of about five years before broadcasters are really using the cloud the way it ought to be used for live production. But I do think it is inevitable that it will happen.”
Looking to stay ahead of the curve in the fast-changing world of live production? Learn how top companies are pushing the boundaries of what’s possible in live events and discover the cutting-edge tools and technologies for everything from live streaming and remote workflows to augmented reality and 5G broadcasting with these fresh insights from NAB Amplify:
The BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians and media technology managers.
March 17, 2023
Amy Webb: What’s the Future of Media? Personalized, Participatory, and “Push Button”
TL;DR
The latest Future Today Institute Tech Trends report finds that the entertainment industry is at a tipping point, where new technologies are allowing exploration of completely new forms of expression.
Haptics will allow us to engage all of the senses when we watch our favorite characters on screen. Until consumer devices offer such features at scale, this enriched experience can be found at select venues such as the Las Vegas Sphere.
AI will allow for the design of new production processes for deconstructed storytelling.
Virtual reality applications are evolving to a new experience category: free roaming, interactive adventures that can be experienced with others.
Storytelling experiences themselves will move to a collaborative model, where the audience has varying degrees of impact on how the narrative unfolds. This in turn opens up the opportunity for repeated engagement with entertainment franchises.
“The entertainment industry is at a tipping point, where new technologies are allowing exploration of completely new forms of expression,” says the FTI’s chief executive, Amy Webb.
The report itself spans multiple industries and the section on entertainment alone runs 70 pages. NAB Amplify has edited the highlights that the FTI has culled from hundreds of sources, including securities filings, patents, academic research, market research firms, white papers, and the press.
Synthetic Influencers
The influencer economy, estimated to be worth $16 billion last year, is giving creators control over their businesses with the consequence that power is shifting away from social media platforms.
FTI thinks the influencer economy is poised to eclipse traditional marketing and advertising channels, but that virtual or synthetic influencers are about to muddy the waters.
Some of these computer-generated characters have already amassed social followings in the millions, agency representation, and partnerships with brands but most importantly they are “unencumbered by the demands and limitations of human influencers.”
Remote Revolution
The content production process is being upended in a number of ways, among them the build out of remote and decentralized workflows away from the traditional production hubs.
Instead, talent in regions like New Mexico, Turkey, Australia and Southeast Asia is benefitting from being able to connect to productions based in other parts of the world. At the same time, the report sees a virtuous circle: not only can productions be made more quickly and cheaply, but, paired with global streaming channels, they afford a chance to “showcase a greater variety of voices from different cultural and demographic backgrounds.”
Participating in the Story
Spatial audio, volumetric video capture, and haptics will increasingly allow us to hear, feel, and see the action, “transforming us into participants rather than spectators of the events happening on our screens.”
What’s more, as the capabilities of our technical devices expand, consumers don’t just watch their favorite content, they experience the narratives with all — or most — of their senses.
“However, as consumers become accustomed to multisensory engagements, and enabling hardware becomes more accessible, expectations might shift in other areas of entertainment. This provides additional layers for storytelling: What does a location smell like? Where is the sound coming from? Is it windy or hot? Creatives may need to design olfactory, sense, and spatial elements, just as sound and production is designed now,” the report proposes.
Incorporating these aspects in storytelling will also potentially help bring viewers back into the cinema, where the sensory experience can be better controlled and the necessary hardware can be made available.
Customized Content
Stories are evolving from finite products to flexible formats consisting of a variety of modules that can be combined in a near infinite number of ways. AI-assisted writing can adjust plotlines automatically to fit the viewer’s taste profile, based on such data as a person’s past viewing choices, browsing history, and favorite online publications.
The practicality of producing “modular narratives” requires that exponentially more material be shot than with linear storytelling.
Naturally, this inflates costs and production time. It also changes the kind of control that directors, producers and writers can exercise over their product.
“Their work becomes an environment and narrative setup in which a variety of actions can take place — similar to what a game designer provides,” the report suggests.
It also questions whether personalized content-on-demand will touch people in the same way as today’s movies. If everyone consumes different versions of a narrative ecosystem, the foundation for a broader societal discussion shrinks or changes, possibly hindering the exploration of important, controversial topics.
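As a purely hypothetical sketch of how the modular, taste-driven assembly described above might work (the modules, tags and taste profile below are invented for illustration and are not drawn from the FTI report), a system could score interchangeable story modules against a viewer’s profile and pick one per narrative slot:

```python
# Hypothetical sketch of "modular narrative" assembly: score interchangeable
# story modules against a viewer taste profile and pick one per slot.
taste_profile = {"thriller": 0.9, "romance": 0.2, "comedy": 0.6}

story_slots = {
    "act_two_midpoint": [
        {"id": "chase_sequence", "tags": {"thriller": 1.0}},
        {"id": "meet_cute",      "tags": {"romance": 1.0, "comedy": 0.4}},
    ],
    "ending": [
        {"id": "twist_reveal",   "tags": {"thriller": 0.8}},
        {"id": "happy_ending",   "tags": {"romance": 0.9, "comedy": 0.5}},
    ],
}

def score(module, profile):
    """Weight each module tag by how strongly the viewer likes that element."""
    return sum(profile.get(tag, 0.0) * weight for tag, weight in module["tags"].items())

personalized_cut = {
    slot: max(options, key=lambda m: score(m, taste_profile))["id"]
    for slot, options in story_slots.items()
}
print(personalized_cut)  # e.g., {'act_two_midpoint': 'chase_sequence', 'ending': 'twist_reveal'}
```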
Two-Way Storytelling
We will see more massive interactive live events, or MILEs: hybrids of TV shows and video games with a storyline that unfolds continuously over several weeks, where viewers can interact with the livestream to influence the action.
Different stories will lend themselves to different degrees of relinquishing control and different forms of consumption, opening up doors for endless experimentation. This new hybrid will also cross-pollinate audiences between gaming and streaming and create new business opportunities for existing titles on both sides.
Another advantage of participatory narratives: What happens will be novel and different each time an experience is launched, keeping the fan community continuously engaged.
AI Voice Dubbing
AI systems can now take a movie’s dialogue and dub it into multiple languages, re-creating actors’ original voices (Val Kilmer’s voice in his reunion with Tom Cruise in Top Gun: Maverick was recreated with a vocal clone). With synthetic media applications adjusting lip movements to fit the spoken words, authentic localization of content can now be achieved quickly and cost efficiently.
The technology can also amplify the impact of such content: Viewers are able to recall dubbed material much better than content with subtitles.
Push Button Video
Text-to-video solutions enable companies to scale their corporate communication and marketing messages.
While long-form narrative content is far from being produced with a single push of a button, the increasing number of end-to-end solutions, bundling algorithmic voice and image technologies, will be accessible and increasingly utilized by budget-conscious companies, members of the creator economy, and regular consumers. The ease of use and rapidly improving quality of these tools will further heat up competition for viewers’ attention.
Virtual Concerts Take Off
Virtual reality concerts first gained popularity during the pandemic to make up for canceled shows. Now they are evolving into their own category of entertainment, providing more intimacy with performers and new opportunities for smaller acts.
Megan Thee Stallion’s “Enter Thee Hottieverse” tour is just one example of a recent VR experience from a popular artist who can make more money virtually than on a physical tour.
Monetization opportunities include merchandise and experiences. And the gaming environment presents natural crossover potential. As companies explore opportunities to make VR available to smaller bands, those artists will potentially be able to connect with and monetize their audiences without having to go on tour.
Live acts are also freeing themselves from location-specific constraints. Volumetric capture and ubiquitous high-speed connectivity promise to replicate performances in real time to any venue.
Personalized Theme Park Experiences
Existing theme park customer platforms, mobile apps, and wearables provide an ever more optimized and personalized experience to park visitors, thanks to AI and sensor technologies.
For example, in Hamburg, the “Yullbe Wunderland” experience allows participants to “shrink” to miniature size so they can dive into the world of the largest model railroad ever created. Up to six people wander through a 250-square-meter space, each wearing a backpack computer, VR headset, a helmet with infrared sensors, microphone, headphones, and hand and foot trackers. Data from this uniform, as well as from 150 cameras in the room, combines with data from other users to enable collaborative sensory experiences.
VR entertainment experiences use the technology for localized social activities that stimulate all the senses, enabling customers to fully immerse themselves in artificial worlds.
Merging Physical and Virtual Theme Parks
The next frontier is connecting these platforms to data outside the park ecosystem for even greater personalization and user friendliness.
For clues look to Disney+, which announced last October that it would morph into an experiential lifestyle platform that enables data exchange between its park and streaming services, while providing a more personalized experience in both.
Both Universal Studios and Disney have filed patents for systems that transmit data about personal preferences from guest wearables to park entities — staff, for example, could communicate accordingly or trigger customized experiences. The two companies also have plans to bring their parks into virtual realms.
“If theme parks fully embrace a presence in the metaverse, it could lay the foundation for an entirely new form of experiencing theme parks, one that’s not bound by real-life limits such as lines, hours of operation, or weather.”
NAB Show will look at how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18.
March 15, 2023
Posted March 15, 2023
Neal Stephenson on the Climate Crisis: When Science Fiction Becomes Just… Science
TL;DR
Author Neal Stephenson discusses his fear of a looming ecological disaster and the science he thinks could solve it during a recent episode of the Recode Media podcast.
“I think it’s going to be the biggest engineering project in human history,” Stephenson says. “It’s going to transform the world — the built environment — because we simply can’t do it without doing engineering on a massive scale.”
Science fiction pioneer Neal Stephenson says humanity needs to concentrate on the big issue — climate change. (He has written about this in his technothriller Termination Shock.)
“The only things worth talking about right now are carbon and the fracturing of society by social media,” he told Vox’s Peter Kafka during a recent episode of the Recode Media podcast.
“How do we reduce carbon emissions and remove the hundreds of billions of kilograms of carbon that we’ve already put into the air? I think we’ll beat that problem.
“But I think it’s going to be the biggest engineering project in human history. It’s going to transform the world — the built environment — because we simply can’t do it without doing engineering on a massive scale. I think we’ll succeed at it. But we’ll have some bad times between now and then.
“I think we’ll start to see the kinds of mass casualty events that are described in Kim Stanley Robinson’s book, The Ministry for the Future, where you might see millions of people dying of heat stroke in a certain area over a very short period of time,” he continues.
“When the temperature goes up, the humidity goes up, the power goes out. And when that kind of stuff starts happening — which I sadly think it will in the next decade — it’s going to have incredibly powerful political ramifications.”
Neal Stephenson, the celebrated author of “Snow Crash,” discusses the perverse relationship between personal wealth and climate survival.
March 23, 2023
Posted March 14, 2023
Blinding Lights: Creating Cinematic Beauty for The Weeknd’s Concert Special
TL;DR
Both nights of The Weeknd’s spectacular 90-minute show at Inglewood’s SoFi Stadium in LA were recorded for an HBO concert special, “The Weeknd: Live at SoFi Stadium,” which is now streaming on HBO Max.
Director Micah Bickham employed roughly 25 Sony Venice cameras outfitted with Angenieux and Canon Cine zoom lenses to capture footage of the live concerts.
The production team partnered with a company called Live Grain to add texture and grain to the concert footage to emulate 35mm film stock.
Last November The Weeknd, aka Abel Makkonen Tesfaye, put on a spectacular 90-minute-plus show over two nights at Inglewood’s SoFi Stadium in LA. Both nights were recorded for an HBO concert special, The Weeknd: Live at SoFi Stadium, which is now streaming on HBO Max.
It was the last stop of the first leg of the “After Hours til Dawn” tour, and Tesfaye pulled out all the stops to reinforce his performance artist handle but still confound and confuse the critics as to what music genre to place him in.
Direction was by Micah Bickham, whose collaboration with the artist goes back to the Starboy album in 2015. He talked to NAB Amplify about how the show was created, recorded and broadcast.
“My focus with The Weeknd is particularly around live production,” Bickham said. “We have quite the partnership really from the Starboy era around 2015. It’s been a handful of years just to understand the world they’re creating from an album point of view and how that translates in to live.”
SoFi Stadium was chosen primarily because The Weeknd was doing two nights there; both nights would be recorded and then cut together. “Just thinking how I wanted to shoot it and present it, we had to shoot across the two nights, plus a handful of pickups that we ended up doing. Also being LA, it was just perfect.”
The discussion before the show about how they wanted the concert film to look took quite a few diversions but a cinematic theme was always front and center. “We talked a lot about cinematic integrity. Yes it’s a concert and yes it’s an artist performing these songs but with a world being created and shaped inside of it,” he said.
“We talked about what the DNA and visual language of this film was but in the end for me it had a lot to do with how we presented it more like a film and less like a concert. What I mean by that is when you watch the film the pace and the tension that the pace creates is pretty unusual for a typical concert film.
“We wanted you to sit with the artist and digest what was happening right in front of you not through an edit and cut that might pull you away too quickly. We wanted you to live in it, when you see it there’s something that resonates differently than a typical concert film.”
The Weeknd’s live shows have already made headlines, especially his 2021 Super Bowl halftime show, which he had reportedly underwritten to the tune of several million dollars. That show went on to be nominated for Emmys for Outstanding Variety Special (Live), Outstanding Lighting Design/Lighting Direction for a Variety Special, and Outstanding Technical Direction, Camerawork, Video Control for a Special.
The SoFi concerts were specifically staged to let viewers see as much of The Weeknd as possible. Tesfaye had the run of the center of the stadium, with an apocalyptic Toronto skyline at one end and a huge suspended moon at the other. The band was hidden out of sight, and Tesfaye was on his own apart from 33 dancers parading as red-shrouded sirens who walked as one.
Concert films can be free-running, sometimes allowing the camera positions to operate without timecodes, picking and choosing shots as they go. Bickham wanted a tighter regime for SoFi. “For this particular project there were a couple of differences, just because of the scale of it. It was important to me early on that if I just monitored the board and did a pure just shoot for capture and I didn’t create a line cut, my feeling was that we weren’t going to be able to hold that many cameras accountable to each moment,” he said.
“So the way I directed it was a little bit of a hybrid in the sense that I did cut a line cut. When a director cuts a line cut there’s an immediacy that takes place from the operators that you’re working with. Everybody sits up a little straighter and there’s a little more tension than if I asked them to ‘just shoot and I’ll nudge you around.’
“Certainly there are times when that’s important and the best way to approach it. For this I felt creating a little bit of tension and immediacy was important so everyone stayed focused. It’s a long show, top to bottom it’s just under two hours. It’s an easy scenario even for the best team to kind of settle in and perhaps not necessarily to be on top of every single moment throughout the show. So yes, we cut it but with a series of pickups too.”
These were mostly single close-up Steadicam shots featuring The Weeknd and the dancers. Adding them to the two nights at the stadium gave Bickham a substantial editing job, but inevitably it was all about finding the show. “We wanted to break it apart and understand the shape of the narrative and how we could build it in the edit.”
With around 25 Sony Venice cameras in play — both first- and second-generation, but mostly second — there was a lot of footage to work with. Lenses were primarily Angenieux and Canon Cine zooms, with a couple of prime lenses employed on handheld cameras. “They were all human-operated and were all on my Multiview and so we’re cutting the full volume of the 25 throughout the night,” Bickham detailed. “From night one to night two we augmented the positions of some of those 25 cameras just to enlarge our coverage. That gave us a slightly different mindset going into night two. It would just accentuate what we had already done on the previous night.”
Designing the cut, it was always planned to let it breathe, especially around Tesfaye who was mostly alone in such a huge space. Bickham explains the thinking: “It was partly because it gives the audience an opportunity to be on stage with him. That’s a very unusual experience especially for a stadium film. Additionally by doing that it creates a tension. The audience are expecting you to cut, they’re expecting to be moved on to something wide or something different and when you don’t do that and you stay kind of locked in to that position something really interesting happens; it makes the next shot that much more impactful.
“So in other words, we kind of lingered even if the song ramped up and became more manic. We didn’t let the pace of the moment dictate the pace of the film.”
Bickham and Le Mar Taylor, The Weeknd’s creative director, had talked a lot about letting moments develop in front of the lens and not blasting through the coverage. “We wanted our performance to be more like a film edit.”
The concert film was meant to be a celebratory career moment for Tesfaye, and the means of capture was always up for discussion, with even IMAX put forward as a medium. “We did think about using 35mm film, in fact through our discussion we did end up partnering with a company called Live Grain,” Bickham recounted. “We wanted the concert film to have representation of the texture of film, to push into a space where you don’t particularly see it. So that was a huge part of our decision making even through the grade and the finishing. It’s got a timelessness with this textural element to it and just feels different from a typical concert picture.”
The use of the Live Grain process is usually for digitally shot movies. NAB Amplify previously reported on Steven Soderbergh’s No Sudden Move using the process, but for a live production it’s new.
“The cameras didn’t have any filtration in place just to make sure the process wasn’t disrupted. Live Grain is essentially real time texture mapping. A lot of great films that were shot digitally used Live Grain to make it feel like it’s 35mm. In a multi-camera almost two hour production 35mm itself isn’t really practical with the mag changes and the amount of film you use.”
The use of Live Grain was in fact introduced by HBO, which has an ongoing relationship with the company. “It’s been tested by them many times on films but our film was maybe the first time being used for a live concert application or certainly one of the first.”
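Live Grain’s pipeline is proprietary, but the broad idea Bickham describes — compositing scanned film-grain texture over each digital frame in real time, with the grain weighted by the underlying image — can be sketched in a few lines. This is a simplified illustration under assumed parameters, not Live Grain’s actual process.

```python
import numpy as np

def apply_grain(frame: np.ndarray, grain_plate: np.ndarray,
                strength: float = 0.08) -> np.ndarray:
    """Overlay a zero-centered grain plate on a digital frame.

    frame:        float32 RGB image in [0, 1]
    grain_plate:  float32 grain texture with its mean removed
    strength:     overall grain intensity (an assumed, tunable value)
    """
    # Grain tends to read more strongly outside the highlights, so weight it
    # by (1 - luminance) as a crude approximation of a film-like response.
    luma = frame.mean(axis=2, keepdims=True)
    weight = strength * (1.0 - luma)
    grained = frame + weight * grain_plate
    return np.clip(grained, 0.0, 1.0)

# Example: a 1080p frame and a matching plate (random noise stands in here
# for a real scanned 35mm grain plate).
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
plate = np.random.rand(1080, 1920, 3).astype(np.float32) - 0.5
out = apply_grain(frame, plate)
```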
The post effect of film grain really nails the timeless cinematic look, but was there ever an option to broadcast the concert live? “There was a time when we considered a one-day IMAX special, but when HBO got involved we realized we had a great partner for what we eventually wanted to do, and it tied in with the upcoming The Idol drama series.
“Ultimately we were able to bring a more caring approach to it, we could take our time.”
Looking to stay ahead of the curve in the fast-changing world of live production? Learn how top companies are pushing the boundaries of what’s possible in live events and discover the cutting-edge tools and technologies for everything from live streaming and remote workflows to augmented reality and 5G broadcasting with these fresh insights from NAB Amplify:
Kendrick Lamar’s “The Big Steppers: Live from Paris” employed multiple digital cinema cameras to deliver a livestreamed outdoor broadcast.
March 14, 2023
NAB Show To Host “Broadcast District”
TL;DR
The 2023 NAB Show will feature the Broadcast District in the West Hall of the Las Vegas Convention Center, running April 15-19.
Broadcast District attendees will have access to NAB Show conference sessions focused on radio and television, the NAB Small and Medium Market Radio Forum, the NAB Diversity Symposium, the NAB Leadership Foundation’s Focus on Leadership Speaker Series, and the Broadcast Engineering and IT Conference.
The TV and Radio HQ will also be located within the Broadcast District from April 16 to April 18, 9 a.m. to 6 p.m.
The 2023 NAB Show will feature the Broadcast District in the West Hall of the Las Vegas Convention Center. From April 15-19, the area will include educational sessions, networking and special events for radio and television broadcasters.
“We are excited to enhance the experience for broadcasters with a centralized location that emphasizes community building and makes it easy to navigate the Show,” says NAB Executive Vice President of Industry Affairs April Carty-Sipp.
“TV and radio broadcasters will receive tangible takeaways to help them generate revenue, streamline expenses and innovate while getting insights into what’s next for their business through a tailored experience designed to meet their needs.”
Broadcast District attendees will have access to NAB Show conference sessions focused on radio and television, the NAB Small and Medium Market Radio Forum, the NAB Diversity Symposium, the NAB Leadership Foundation’s Focus on Leadership Speaker Series, and the Broadcast Engineering and IT Conference.
The TV and Radio HQ will also be located within the Broadcast District from April 16 to April 18, 9 a.m. to 6 p.m. Broadcast District features include:
NAB Member Lounge – For NAB members to host impromptu meetings, relax, connect with fellow broadcasters and enjoy complimentary refreshments.
NAB Sip-and-Speak Series – A daily series of quick, intimate Q-and-A discussions with the most influential leaders in broadcasting; complimentary beverages included.
Discussion Den – A forum for deep-dive discussions and workshops on topics of interest to broadcasters.
Resource Row – Includes NAB Affinity Program Partners offering exclusive services and cost-saving solutions for NAB members.
Happy Hour Events – Social gatherings for television and radio broadcasters, featuring entertainment and networking.
West Hall exhibits are organized around three curated areas: Connect, Capitalize and Intelligent Content. Exhibitors such as AJA, Amagi, AWS, Harmonic, Intelsat, LTN Global, MediaKind, Microsoft, Nautel, Planar, Signiant, Telestream, Verimatrix, Veritone, Vislink and Vizrt will have West Hall booths.
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
IABM and NAB Show are inviting industry members to participate in the IABM MediaTech Business Tracker.
March 14, 2023
The Future of Production Amplified: How Open Source Is Taking Hollywood by Storm
TL;DR
David Morin, executive director of the Academy Software Foundation, discusses the ASWF’s mission and most recent projects, as well as why developing a strong software engineering community is critical to the future of Media & Entertainment.
The ASWF was established in 2018 by the Academy of Motion Picture Arts and Sciences with the mission of supporting and promoting the use of open source software, and to help developers collaborate on and contribute to open source projects.
Open source technology has touched every single major motion picture made in the last 20 years, enabling workflows ranging from render and color management systems to high dynamic range image file formats used by the likes of Wētā FX and ILM.
Since the advent of digital workflows, film production has been dominated by proprietary software solutions — the result of countless hours of costly R&D undertaken by studios and VFX houses. But with the ability to access and customize the source code, open source software has proven to be a cost-effective and flexible solution for filmmakers at every stage of the production process from pre-visualization and storyboarding to compositing and visual effects.
Open source is taking Hollywood by storm. Enabling workflows ranging from render and color management systems to high dynamic range image file formats used by the likes of Wētā FX and ILM, open source technology is in use in every single major motion picture made within the last 20 years.
Aimed at fostering the development of open source tools, the Academy Software Foundation was established in 2018 by the Academy of Motion Picture Arts and Sciences. The ASWF’s mission is to support and promote the use of open source software in the motion picture industry, and to help developers collaborate on and contribute to open source projects. Providing resources and support for developers and engineers who want to contribute to open source projects related to the film industry, the ASWF is supported by a consortium of major film studios, technology companies, and other organizations including SMPTE, MovieLabs and the Visual Effects Society.
As part of NAB Amplify’s video series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with ASWF executive director David Morin about the Foundation’s mission and its most recent projects, as well as why developing a strong software engineering community is critical to the future of Media & Entertainment.
“The open source software credo, if you will, is about sharing software between different organizations that are otherwise competing on the marketplace but can work together,” Morin explains, outlining a system of checks and balances that allow collaboration among competitors.
“We want to encourage our engineers, give them a place where they can go and where to a degree call the shots on the software that they’re developing, and also promote them, promote their good work, recognize it, and make sure that our engineers stay with us in our industry and they have a good pathway in front of them for continuing to develop leading-edge imaging and other types of software.”
In this exclusive Q&A for NAB Amplify’s “The Future of Production” series, David Morin, executive director of the Academy Software Foundation, chats about the Foundation’s mission and its most recent projects, as well as why developing a strong software engineering community is critical to the future of Media & Entertainment.
In Part 1, Morin provides an overview of the ASWF’s goals and objectives, highlighting its mission to streamline and simplify workflows in the motion picture industry. He also discusses the problems the Foundation aims to solve, and the importance of open source software in motion picture production.
Watch Part 1 below:
In Part 2, Morin discusses the newest “sandbox” project to be adopted by the ASWF: the Open Review Initiative, which uses technology from Autodesk’s review and playback software RV along with code contributions from DNEG’s xStudio and Sony Pictures Imageworks’ itview, and aims to build a unified, open source toolset for playback, review and approval. He also covers the Digital Production Example Library, which grew out of the industry’s longstanding need for production-grade sample content in order to test technology in development and ensure that it can scale to the demands of film and TV production.
Watch Part 2 below:
In Part 3, Morin gives us a closer look at the ASWF’s unique structure and membership, and the talented engineers and tech developers who serve on the Foundation’s Governing Board and Technical Advisory Council. He also takes us on a journey through the Foundation’s growth over time, beginning with the initial investigation conducted by the Academy Sci-Tech Council that, with the participation of the Linux Foundation, led to its formation, and defines what a healthy open source ecosystem looks like.
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
ETC@USC’s Erik Weaver discusses the making of “Fathead,” a new proof-of-concept for virtual production and cloud-based workflows.
March 9, 2023
The Academy Spotlights This Year’s Oscar Nominees for Best Editing
TL;DR
The Academy of Motion Picture Arts & Sciences has unveiled a nominee spotlight celebrating the talented editors behind the five films nominated for Best Film Editing at the 95th Oscars.
This year’s nominees are a diverse and impressive collection of films, showcasing the immense talent and artistry of their respective editors.
The nominees include “The Banshees of Inisherin,” edited by Mikkel E.G. Nielsen; “Top Gun: Maverick,” edited by Eddie Hamilton; “Elvis,” edited by Matt Villa and Jonathan Redmond; “Everything Everywhere All At Once,” edited by Paul Rogers; and “Tár,” edited by Monika Willi.
Artfully piecing together visuals, sound and music, film editors play a central role in shaping a film’s narrative, painstakingly crafting impactful stories to help captivate audiences.
Ahead of the 95th Oscars ceremony scheduled for this Sunday, the Academy of Motion Picture Arts & Sciences has unveiled a nominee spotlight celebrating the talented editors behind the five films nominated for Best Film Editing.
This year’s nominees are a diverse and impressive collection of films, showcasing the immense talent and artistry of their respective editors.
The haunting yet hilarious The Banshees of Inisherin, edited by Mikkel E.G. Nielsen, is a standout example of how editing can enhance the emotional impact of a story.
The action-packed Top Gun: Maverick, edited by Eddie Hamilton, is a thrilling demonstration of how pacing and tone can drive a film’s narrative.
Elvis, a chronicle of the life of the iconic musician edited by Matt Villa and Jonathan Redmond, is a masterclass in how to craft a compelling biopic.
The visually stunning, heartfelt sci-fi adventure Everything Everywhere All At Once, edited by Paul Rogers, demonstrates how smart editing can create a cohesive, unified whole out of seeming chaos.
Finally, Tár, a powerful exploration of “cancel culture” edited by Monika Willi, is a testament to the editor’s ability to shape the tone and structure of a film.
Watch the video above celebrating these filmmaking talents. As we wait to see which film will take home the coveted statuette for Best Film Editing on Sunday, one thing is certain — the work of these talented editors has enriched and elevated each of the films they worked on.
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Writer-producer-director Todd Field’s first film in 16 years, “Tár” explores the cult of personality, power imbalances and cancel culture.
March 9, 2023
The Academy Spotlights This Year’s Oscar Nominees for Best Cinematography
TL;DR
The Academy of Motion Picture Arts & Sciences celebrates the magic of cinematography in its Nominee Spotlight celebrating the DPs behind the five films nominated for Best Cinematography at the 95th Oscars.
Demonstrating an incredible range of talent, this year’s nominees take moviegoers from the trenches of World War I to the dazzling stage lights at the Rushmore Civic Center.
The nominees include James Friend (“All Quiet on the Western Front”), Darius Khondji (“Bardo, False Chronicle of a Handful of Truths”), Mandy Walker (“Elvis”), Roger Deakins (“Empire of Light”), and Florian Hoffmeister (“Tár”).
Don’t miss out on the magic of cinematography, the extraordinary art form that breathes life into a film, elevating it to an immersive work of beauty and emotional intensity. Through a complex construction of camera angles, lighting, framing and movement, each shot is a carefully crafted brushstroke designed to help convey mood and atmosphere, creating a sensory experience that lingers long after the credits roll.
Watch The Academy’s Nominee Spotlight celebrating the incredible range of talent demonstrated by this year’s Oscar-nominated cinematographers. From the trenches of World War I to the dazzling stage lights at the Rushmore Civic Center, each film takes moviegoers on a visually stunning journey.
In All Quiet on the Western Front, James Friend captures the harrowing reality of trench warfare in World War I, using striking camera angles and shadows to create a visceral experience. Darius Khondji’s work in Bardo plays with light and color to evoke the surreal, dreamlike atmosphere of the film. Mandy Walker’s cinematography in Elvis transports the audience back to the 1950s, capturing the energy and excitement of early rock-and-roll performances.
In Empire of Light, Roger Deakins creates a mesmerizing visual landscape, with every shot a breathtaking work of art. Finally, Florian Hoffmeister’s restrained cinematography in Tár employed patient and observational long takes to establish an atmosphere and tell the story, while also providing seriously striking images.
Experience the magic of each film through the lenses of these talented cinematographers. Watch The Academy’s Nominee Spotlight now to celebrate the incredible craft of the DPs responsible for the visuals of these five nominated films for Best Cinematography.
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Company 3’s Masick explains that “Tár” writer/director Todd Field “wanted the images to look authentic and untouched.”
March 8, 2023
The Future of Production Amplified: ETC@USC Takes Virtual Production into the Cloud with “Fathead”
TL;DR
Erik Weaver, director of adaptive and virtual production at ETC@USC, discusses the making of “Fathead,” a new proof-of-concept for virtual production and cloud-based workflows.
The fifth short film to emerge from the ETC Innovation and Technology Grant program, the 20-minute short film was directed by USC School of Cinematic Arts graduate c. Craig Patterson.
Produced almost entirely in the cloud, “Fathead” was shot on Stage 15, Amazon’s new 34,000-square-foot virtual production facility in Culver City.
Powered by cloud-based virtual workstations and rendering services from AWS, the project employed Epic Games’ real-time rendering software Unreal Engine along with Unreal’s MetaHuman Creator to generate digital human characters.
With the ability to create immersive virtual environments and integrate live-action footage in real time, virtual production has opened up new creative possibilities for filmmakers, allowing them to bring to life stories and worlds that were previously impossible to create. Meanwhile, cloud-based workflows have enabled remote collaboration and streamlined production pipelines, reducing costs and increasing efficiency.
Aimed at pushing the boundaries of virtual production and cloud-based workflows, the Entertainment Technology Center at USC (ETC@USC) has unveiled the first part of a new white paper, “Cloud Computing: Growth without Bounds,” about the fifth short film to emerge from its ETC Innovation and Technology Grant program.
Fathead, a 20-minute short film directed by USC School of Cinematic Arts graduate c. Craig Patterson, showcases the potential of virtual production, addressing questions about how to optimize workflows to essentially eliminate travel and minimize the number of on-set participants. Produced almost entirely in the cloud, Fathead was shot on Stage 15, Amazon’s new 34,000-square-foot virtual production facility in Culver City. Powered by cloud-based virtual workstations and rendering services from AWS, the project employed Epic Games’ real-time rendering software Unreal Engine along with Unreal’s MetaHuman Creator to generate digital human characters.
As part of NAB Amplify’s video series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with Erik Weaver, director of adaptive and virtual production at ETC@USC and executive producer of Fathead, about how the short film was brought to life and its importance as a proof of concept for what’s possible in virtual production and cloud-based workflows.
“We did everything in the cloud, from creation of assets to global coordination of teams and effort through to basically complete environments and virtual art departments and builds, and, finally, shooting on set, obviously, but then doing real-time to the cloud,” Weaver says. “We were able to stream ARRI Alexa raw directly to the cloud, which ended up being faster than writing in the backup drives.”
In Part 1 of our Q&A, Weaver explains why the making of Fathead is significant from a technological standpoint, and its place as the first part of a series of white papers exploring virtual production in the cloud. He also talks about some of the biggest challenges to getting Fathead made, as well as what we can expect from the future of film and television production.
Watch Part 1 below:
In Part 2, Weaver describes what it was like to shoot Fathead on Stage 15, Amazon’s new 34,000-square-foot virtual production stage in Culver City, along with the cloud-based workflow the production team employed. He also discusses the lightning-fast ingest speeds for raw ARRI files, how audio was handled in order to avoid additional ADR sessions, and some of the key people involved in the production, as well as how they recreated an entire junkyard on Stage 15.
Watch Part 2 below:
In Part 3, Weaver talks about some of the lessons learned from making Fathead, including how crucial planning is in on-set virtual production, with “fix it in pre” becoming the new mantra. He also discusses how cloud-based workflows will impact color grading and other post-production processes, as well as the increasing use of digital characters in film and television production. Finally, he identifies some of the biggest barriers to employing virtual production and cloud-based workflows in today’s current ecosystem.
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
Katrina King, global strategy leader for content production at AWS, on the impact of cloud production on the future of film & TV production.
March 5, 2023
ChatGPT: Transformative Technology or Imminent Disaster (Why Not Both?)
TL;DR
ChatGPT poses a fundamental question about how artificial intelligence will transform the workforce behind all creative media.
The reality is that AI is already a collaborator and that creatives will be using tools like Midjourney and ChatGPT to streamline their work, cut costs and iterate new work.
Every time we feed an AI engine it gets better and better at mimicry, perhaps to the extent that one day it will be able to simulate the human lived experience.
There’s a legitimate concern that the majority of art, media, and content will be machine-generated sooner rather than later. But how likely is that reality?
“While ChatGPT [or chat-based Generative Pretrained Transformer] isn’t exactly Skynet, it does bring up some interesting questions about how it can benefit, or damage, society,” a SMPTE blog post suggests. “Many worry that creative industries, such as writing, content creation, and even art, will see major disruptions within the next few years.”
As it stands today, these programs are far from perfect. Some information in a ChatGPT text might be inaccurate. SMPTE contends that everything from text to video is “extremely simple” when generated by an AI.
This makes generative AI tools good for writing emails, basic copy, or social posts, but they “can’t write a film script that holds any narrative weight since it lacks the capacity for nuance or subtext.”
SMPTE believes that AI programs will “probably” play a huge role in future when it comes to simple tasks but artists and other creatives “shouldn’t be concerned about losing their jobs anytime soon.”
Other commentators beg to differ. Peter Csathy, writing for Wrap PRO, says, “The potential impact of this technology is mind-boggling and should not be underestimated. It will transform all of our lives, including those of us in the arts.”
Take screenwriting. According to Csathy, who put ChatGPT through its paces, the AI text tool “writes a full, beautifully formatted script in seconds. And not just one, mind you. Endless iterations if that’s what we want,” he warns.
“Much to the pleasure of budget-constrained producers and studio bean counters, ChatGPT can churn and burn 24/7 with no union representation,” he adds.
What else? Well, AI has been used in VFX for years and its use will only increase.
“If we allow it, Mubert could take music cues from Danny Elfman’s film scores — not to mention the entire world of soundtracks — and compose entirely new ones in seconds,” notes Csathy. “Thousands of them.”
AI is already making huge inroads and advances in its capabilities are happening at pace.
“We are only at ChatGPT version 1. Just imagine versions 2 and 3. Or version 10 in the year 2030,” Csathy observes.
The question is, what should creators do about it?
The consensus is that we have to learn to work with AI — not try to deny or ignore it. There is in fact a great deal of power in being human that a machine cannot (yet) replicate.
“Even when AI programs do get better at making content, the enjoyment of said content is extremely subjective,” reassures the SMPTE post. “Most people enjoy media that reflects their personal experiences or the experiences of others. Since these programs can’t have human experiences, they will always be one step behind human content creators.”
Matt Griffin, founder of the 311 Institute, explains how he and his 10-year-old son wrote and illustrated a book (a guide to professional runners, with proceeds to charity) in under a day. He explains in an interview for technology developer Arm’s Blueprint blog that it cost just $25 — rather than the $15,000 and the two to three months’ work it would normally have taken.
He thinks the technology is a double-edged sword.
“On the one hand, these technologies democratize access to skills for everybody. That means I can do new things faster, which is great for efficiency, but the artist that I would’ve employed that I no longer need to employ, that’s bad news for them.”
An AI technology developed by Futuri Media could streamline costs and help deliver targeted content for local radio broadcasters. Rogers in Canada and Alpha Media in the United States are beta testing RadioGPT.
“It all starts with our monitoring of a local market,” Daniel Anstandig, founder and CEO of Futuri Media, explained during an episode of the NAB podcast. “We have a… system that looks at everything that’s trending on Facebook, Twitter, Instagram, and over 250,000 news sources. And then based on that, we can see what people are talking about or what they care about in a local market,” he said.
“We take that content and we use GPT-3 to create engaging, dynamic scripts that then we combine with our own back-end and other creative content to kind of position the personality’s voice. We can tie directly into the automation system of a radio station to deliver a voice track that’s essentially real time and just created based on what’s happening in that local market.”
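Futuri hasn’t published RadioGPT’s internals, but the pipeline Anstandig outlines — gather what’s trending in a local market, have a GPT-3-class model draft a localized on-air script, then hand the text to the station’s automation and voice chain — could be sketched roughly as follows. The prompt wording, model choice and market data here are illustrative assumptions using the GPT-3-era OpenAI completions API, not anything Futuri-specific.

```python
import openai  # GPT-3-era completions API, as available in early 2023

openai.api_key = "YOUR_API_KEY"  # placeholder

def write_voice_track(market: str, topics: list[str], personality: str) -> str:
    """Draft a short localized voice-track script from trending topics.

    The market, topics and personality would come from whatever local-trend
    monitoring is in place (Futuri's own monitoring system is proprietary).
    """
    prompt = (
        f"You are {personality}, a radio host in {market}. "
        f"Write a 20-second, upbeat on-air break that touches on: "
        f"{', '.join(topics)}. Keep it conversational and local."
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 model choice
        prompt=prompt,
        max_tokens=120,
        temperature=0.8,
    )
    return resp["choices"][0]["text"].strip()

# Example: the returned script would then feed a TTS/automation system.
script = write_voice_track(
    market="Columbus, OH",
    topics=["county fair opens tonight", "I-70 lane closures", "heat advisory"],
    personality="a morning-show host",
)
print(script)
```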
Anstandig says he believes generative tools can be a significant cost-reduction tool for media companies.
“Every media company is going to have to figure out how it can be a part of streamlining processes, eliminating cost, bolstering their best talent to do what they do best, and take away some of the tediousness of the brain burn of constantly reconfiguring content for multiple platforms.”
That said, we shouldn’t expect mass redundancies from AI in the creative industries — at least for another decade.
“I don’t think I’d be running right away to GPT to replace my entire content team,” Anstandig said. “I would be thinking about how to leverage the best talent I have on my team, and make the environment such that we’ve reduced the cost and the overhead of running around doing repeated tasks.”
While ChatGPT is perfectly capable of writing the majority of the content “in publishing and in text and blogs,” he says, “there’s still a human spark and a sense of sourcing information that is necessary.
“Ultimately people trust people. So while [AI] can be helpful in developing first drafts … it’s probably going to be a very creative partner and not necessarily a holistic replacement.”
AI is not going to stop and every time we feed an AI engine it gets better and better at mimicry, perhaps to the extent that one day it will be able to simulate the human lived experience.
What then?
“It’s very easy for us to project our human experience and emotions onto ChatGPT,” he says. “And it won’t be long before we hear about factions of people who want to liberate tech and give it civil liberties and rights. Who knows, in our lifetime that probably will be a real debate at some point around Advanced AI [when it reaches human level intelligence].”
But let’s keep it in perspective.
“Behind the attention-grabbing hyperbole about ChatGPT is a fundamental reality of the tech industry: keep your eyes on the builders, not the commentators,” writes Yves Bergquist. “There’s no tech without products. And there are many less high-tech products on our tables than in our podcasts.
“No, ChatGPT is not Artificial General Intelligence (not even close),” he continues. “No, this isn’t the end of Google (they’re doing well with AI thank you). No, as we heard at CES, 90% of content won’t be machine-generated in two years. No, it’s not the end of writing.
“As often we find the truth by following the money. And in the attention economy there’s big money in hysteria for tech commentators and their book agents, not to mention OpenAI’s lucky stock holders.”
With nearly half of all media and media tech companies incorporating artificial intelligence into their operations or product lines, AI and machine learning tools are rapidly transforming content creation, delivery and consumption. Find out what you need to know with these essential insights curated from the NAB Amplify archives:
This year promises industry-transforming breakthroughs for AI in music composition, video animation, writing code and translation.
February 28, 2023
The Future of Production Amplified: Delivering Broadcast-Quality Graphics to Creators Everywhere
TL;DR
Mike Ward, head of marketing for Singular.Live, discusses how smart graphics, or “Intelligent Overlays,” are driving the future of live remote production.
Intelligent Overlays are the next generation of graphic overlays, and can be localized, personalized and interactive, providing an immediate real-time feedback loop to content creators for higher levels of engagement and interactivity.
As a cloud-native platform, Singular enables remote and cloud-based workflows for productions of all sizes.
Uno, Singular’s new all-in-one graphics solution developed for streamers, is described as “like Canva for live production.”
In the fast-paced world of live production, where every second counts, live graphics play a critical role in delivering dynamic and engaging content to viewers around the globe. Whether it’s the crawl on a news channel or real-time score updates on a sports broadcast, production teams toil behind the scenes to create and execute these graphics seamlessly, adding an extra layer of excitement to the viewing experience.
Now Singular.Live aims to democratize live graphics for productions of all sizes with cloud-based tools that enable the creation and delivery of custom graphics in a fully remote workflow. Singular’s live graphics platform also allows users to surpass traditional graphics with a host of customization and personalization options designed to increase viewer engagement and interaction.
As part of NAB Amplify’s video series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with Mike Ward, Singular’s head of marketing, about how smart graphics, or “Intelligent Overlays,” are driving the future of live production. Ward, formerly managing director for Europe, the UK and the Middle East at solutions provider Reality Check Systems, brings more than 20 years of experience in the broadcast space to his role at Singular.
“Intelligent Overlays are the next generation of graphic overlays,” he explains. “They can be localized, personalized and interactive, providing an immediate real-time feedback loop to content creators for higher levels of engagement and interactivity with their audience.”
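Singular’s own control APIs aren’t detailed here, so the snippet below is purely hypothetical: it only illustrates the general pattern of pushing a data update (a score, a locale, a poll) to a cloud-rendered overlay over ordinary web protocols, which is what makes the kind of real-time feedback loop Ward describes possible.

```python
import json
import urllib.request

# Hypothetical control endpoint -- NOT Singular.Live's actual API. The point
# is only that an overlay rendered in the viewer's player can be updated in
# near real time over plain HTTP.
OVERLAY_CONTROL_URL = "https://example.com/overlays/match-123/update"

def push_overlay_update(payload: dict) -> None:
    """Send a data update (e.g., a score change) to a cloud-rendered overlay."""
    req = urllib.request.Request(
        OVERLAY_CONTROL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # would fail here; the URL is a placeholder

# Example: localize and personalize the same overlay for one viewer segment.
push_overlay_update({
    "scorebug": {"home": 2, "away": 1, "clock": "73:12"},
    "locale": "es-MX",  # localized strings
    "poll": {"question": "Player of the match?", "options": ["A", "B", "C"]},
})
```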
In Part 1 of our Q&A, Ward defines Intelligent Overlays and explains how they’re helping to revolutionize live production. He also discusses specific use cases, including the creation of interactive menus and live quizzes and polls for Blizzard’s esports productions, along with Singular’s technology partners, and how the Singular platform can help broadcasters and streamers unlock or enhance personalized viewer experiences.
Watch Part 1 below:
In Part 2, Ward explains how the Singular platform enables remote and cloud-based workflows, along with the advantages — and risks — of employing cloud-native platforms. He also discusses the need for redundancies in any live production workflow — cloud-based, on-prem, or a combination of the two — as well as what he sees for the future of live production.
Watch Part 2 below:
In Part 3, Ward talks about Uno, Singular’s new all-in-one graphics solution developed for streamers, which is often described as “like Canva for live production” because of the platform’s simple interface, which allows even novice users to customize and apply overlays from a library of templates. He also discusses how the release of Uno allows Singular to focus more on serving the needs of its enterprise customers, and how Intelligent Overlays are transforming video consumption from a passive activity into an active, engaging experience.
Watch Part 3 below:
Connect with Mike Ward on LinkedIn, and follow Singular on Twitter and Instagram. You can also learn more about Singular’s live graphics platform at Singular.Live.
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
Digital Domain’s chief technology officer Hanno Basse discusses how the development of digital humans is transforming film & TV production.
February 23, 2023
Watch This: Reworking the Workflows for Live Streaming
TL;DR
There’s no cookie-cutter approach to live streaming at Disney+, which tweaks its bitrates according to service, sport and whether the content is live or on demand.
Contrary to what many vendors advocate in the live streaming sports space, ultra-low latency is not a priority for major streamers. That comes down to whether audiences would pay more for better content or a fractionally faster service.
The wholesale shift from live production gear to the cloud won’t happen until the older generation of broadcast engineers retires.
Live streaming is often painted as a trade-off between resolution and latency. The bandwidth simply isn’t there to support both. It’s a simplistic equation at best and one that doesn’t match the reality of streamers who are constantly experimenting with the parameters to find the service that best delivers the experience different audiences expect.
“We’re all over the road when it comes to how we’re quickly adapting to the quality for live versus on demand on our service,” said Michael Fay, VP of Software Engineering at Disney Streaming Services, in a revealing panel on the topic convened for 2022’s NAB Show Streaming Summit.
“I think that that speaks to the challenges of what the future of live and linear is going to look like. When we produce a pay per view UFC event, for example on ESPN, we stand up an entirely new and unique workflow, and we set the bitrates for that particular event based on what we think the quality of the stream needs to be.”
Fay is responsible for online distribution strategy for Disney Plus, Hulu, Star+ and ESPN+ as part of the company’s senior technology team.
“The point is that when it comes to live versus on demand nothing is set in stone,” he continued. “We’re still trying a variety of different things to figure out what is the right answer for our subscribers. That includes live, that includes linear and that includes video on demand.”
Disney is forecasting more growth in live, he said.
“We’re willing to rewrite the book when it comes to live and linear and figure out what’s working. So even a mature brand like Disney Plus is still figuring it out.”
Take bitrates as one metric of service performance. We learn that Hulu’s live adaptive bitrate averages 6.9 Mbps, while Hulu’s VOD bitrate averages 5.6 Mbps. “So, our live content broadcasts domestically at a much higher quality than our on-demand content for Star+, which is available internationally.”
However, ESPN’s VOD bitrate is 9.8 Mbps on average and ESPN live content is 6.8 Mbps. “Completely inverse of what Hulu is. Then our theatrical content at Disney Plus is 12.8 Mbps — a real quality theatrical experience. The reason I share that with you is because the rules for us are changing and we don’t have a rule of thumb for how we treat live versus how we treat on demand.”
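Those figures are averages across adaptive ladders rather than single fixed rates, but they show how each service and content type ends up with its own target. A toy lookup of the averages Fay quotes (values in Mbps; the structure is purely illustrative, not Disney’s actual encoder configuration) might look like this:

```python
# Illustrative only: average targets per service/content type, in Mbps,
# using the figures Fay quoted. Real ABR ladders are sets of renditions
# tuned per event, not single numbers.
AVG_BITRATE_MBPS = {
    ("hulu", "live"): 6.9,
    ("hulu", "vod"): 5.6,
    ("espn+", "vod"): 9.8,
    ("espn+", "live"): 6.8,
    ("disney+", "theatrical_vod"): 12.8,
}

def target_bitrate(service: str, content_type: str) -> float:
    """Look up the average target for a service/content-type pair."""
    return AVG_BITRATE_MBPS[(service, content_type)]

print(target_bitrate("espn+", "vod"))  # 9.8
```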
Or take latency as another metric. “We ran it up the flagpole and said, ‘How do we want to handle latency?’ And I’m just speaking specifically to a complex nature of a live production for scalable, very large multi-million concurrent connection audiences. The decision that Disney made was we will take higher quality over faster startup times or ultra low latency streams.”
Fay said, “We would rather give the audience in a football match a crystal clear Chyron or lower third than be within 3 seconds of the broadcast program over the air. We made that decision.”
Disney could have chosen to run on WebRTC to reduce the latency and give everybody the same stream within three or four seconds, but opted for a higher quality audio-visual experience instead.
He said, “We find that our subscribers care more about having an awesome high quality experience than they do about having a low latency experience. And that goes for breaking news and that goes for sports.”
Magnus Svensson, Media Solution Specialist and Partner, Eyevinn Technology (also the panel moderator) agreed with Fay, “Because if you ask a viewer if they will pay extra to get a few seconds lower latency or if they will pay extra to get better quality streams for 4K and HDR, I think we all know the answer. The viewers will pay for quality. Very, very few will pay for getting five or two seconds lower latency. These users are probably somewhere else doing live betting.”
Speaking for US-based hybrid broadcast and broadband TV delivery service Evoca TV, Imran Maskatia, VP of Product Development, said, “We don’t get complaints on latency, honestly. I also don’t think it has to be latency or quality. I think there’s an art form and every delivery mechanism is different. I think there’s a happy middle ground where you can achieve both quality and less latency.”
Ironically perhaps, the Disney executive said he envied the relative simplicity of a Netflix since it doesn’t have a live product. He indicated that having live and VOD services meant the potential existed for the company to introduce more interactive features at the cost of complexity.
“I don’t count anything as being dead and, you know, keep a flexible and open framework so that if data integration with a particular live program for sports or breaking news needs to be incorporated, you’re capable of doing that and your service is able to do that.”
The panel also discussed the shift of live production infrastructure from on-prem to cloud, concluding that this won’t happen for most of the industry for at least five years and perhaps up to ten.
“I believe that it’s a generational thing,” said Fay. “Younger software engineers and younger hardware engineers are going to be more tolerant of the cloud than older broadcast engineers who are getting into the streaming space, who want to see the red LED flashing to know that that thing is working.”
Maskatia added that legacy investment was also an impediment to change: “There will be a purchase point at some point in the future where people will switch from on-prem to cloud, but it won’t be immediate. It will be when the cost is cheaper.”
Looking further ahead, the panel considered the extent to which streaming services can go beyond the capabilities of broadcast and introduce personalization and interactivity.
Pierre-Louis Theron, VP of Product Management, Content Delivery at Lumen Technologies, said that it had to bring real value. “What exactly it is to potentially watch something together in the metaverse, we don’t really have any idea yet. There have been some examples (such as Watch Groups). It’s probably working for some content, not for some other content. I don’t think we have yet figured it out.”
Fay suggested that the next iteration of Disney Plus “is where you’re getting like all these Pixar characters and you have your own avatar, you have a whole Pixar experience.”
Held April 17-18 as part of NAB Show in Las Vegas, the two-day Streaming Summit focuses on streaming business and technology.
A two-track event that includes fireside chats, technical presentations, case studies, and roundtable sessions, the Streaming Summit covers topics including: the bundling of content; codecs; transcoding; live streaming; video advertising; packaging and playback; monetization of video; cloud based workflows; direct-to-consumer models; and the video ad stack.
This year’s programming includes:
• Defining the User Experience for Live Sports Streaming
• Packaging and Distribution Strategies for Subscriber Engagement
• Challenges and Best Practices for Delivering Video at Scale
• The Business of Sports Streaming: Monetization Opportunities and Challenges
• FAST, AVOD and SVOD: OTT Business Models for Every Consumer
• Challenges and Opportunities in Measuring Video Advertising
• Scaling Cloud Based Workflows for Quality and Price
• Cord Cutting, Linear TV, and the New Streaming TV Bundle
NAB Show is the preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry.
February 22, 2023
What’s Next for Live Streamed “Experiences”?
TL;DR
Electronic musician deadmau5 talks about his experiences pioneering live music streams in a recent keynote session with Streaming Media.
The Canadian DJ thinks the biggest problem with any live streamed concert is replicating the back-and-forth interplay that happens between performer and audience in an arena.
Real-time rendering in the metaverse might provide an answer to these issues.
Live streaming gigs came to the fore during lockdown and are now big business, with companies like Live Nation promoting live streamed experiences for music fans who can’t, or choose not to, attend the concert.
Performers like electronic musician deadmau5 have been doing this for well over a decade, helping to pioneer the experience. In doing so, the Canadian DJ, whose name is Joel Zimmerman, has taught himself a lot about streaming technology from hardware switchers to the latest codecs but nothing in his view has come close to replacing performing live in front of a crowd.
“There’s much to be said about being in a venue with an actual human. That’s some level of interaction and communication. [That said] streaming just lends itself so easily to [not] requiring big travel demands or set ups.”
For years, deadmau5 used Twitch or Mixer to stream live but during the pandemic he launched mau5trap, a proprietary streaming service. He also joined video streaming platform StreamVoodoo as an equity partner.
“All musicians and artists need a solution like this today and for the future,” deadmau5 explained in a 2020 press release. “I was working on bringing my shows online and starting my own streaming platform to connect with my community. Not just for me, but for all musicians in the world.
“After the pandemic began, we needed a solution for video quality with excellent sound, beyond anything that has been done before. We tested every other provider offering and I didn’t find what I was looking for. StreamVoodoo is phenomenal for live concerts and streaming sessions at scale with no latency.”
To the StreamingMedia audience, he claimed there was a lack of a video-centric streaming platform: “Zoom was not built with video in mind,” he said.
A primary issue is latency. “If [the stream] is struggling to keep up within two milliseconds and it’s having a hard time doing that, then that just breaks it.”
He remains concerned about achieving, in an online environment, the same live alchemy between musician and audience that happens in a stadium.
“You go see the Foo Fighters and see no show is the same. Maybe the setlist stays the same, but there’s always some story, there’s always some back and forth between Dave [Grohl] and the crowd and all the other guys. So that’s like something that that is there in the moment, for those people that are right there.”
He implies that if the streaming experience isn’t right then viewers are “just a fly on the wall… you’re not inclusive into that experience,” he said.
The sense that “a ticket holder [has] invested in you, in that moment in time, to have exclusivity of that moment and being in the audience and being in the crowd” is a missing element in some live streams.
Hosting live concerts in the metaverse with real-time rendering is a potential answer. He explains, “I don’t foresee the future being a camera’s perspective of something because then you’ve just locked it to whatever they give you. I’m at a concert and then I want to walk around that front row. So we’re going to need some kind of volumetric representation of that so that I can go and do that.”
He tells StreamingMedia that he gets his inspiration for live streaming from live streamers on Twitch such as TheSushiDragon. “This kid, you got to look him up. He’s got this warehouse in Montana and just every new gadget, every new little handheld toy, every kind of IP camera or follow camera robot.
“He’s just built this huge playpen and he does live editing, live streaming, and uses all peripheral technology to do these great live edits.
“So I’m finding all these characters, you know using all this different tech. You just got to kind of find what works for you.”
Kendrick Lamar’s “The Big Steppers: Live from Paris” employed multiple digital cinema cameras to deliver a livestreamed outdoor broadcast.
February 21, 2023
Video Viewing Habits: What To Know, How It’s Going To Go
TL;DR
For the first time, US adults will spend more time watching digital video than traditional linear TV, according to the latest forecast from Insider Intelligence.
TikTok versus Netflix will be a major trend to watch this year. The lines between social and entertainment have blurred, and TikTok is now coming for the bigger-screen video players.
Twitter is in decline and there’s not much Elon Musk can do about it.
TikTok will soon surpass Facebook as the most-consumed social network among US adults, according to a new report. It’s part of a wider trend that sees the time adults spend watching digital video overtake cable- and satellite-delivered TV for the first time in 2023.
The briefing from Insider Intelligence forecasts that this year adults in the US will watch an average of two hours and 55 minutes of TV per day, while digital video time will climb to three hours and 11 minutes. That means linear TV will drop below 50% of total video time for the first time, while digital video will make up 52.3%.
“The growth of digital video is especially impressive when you consider that, as recently as four years ago, it accounted for roughly half of TV time,” comments Paul Verna, principal analyst and head of the digital advertising and media desk at Insider Intelligence.
“And bear in mind that our time spent forecasts are for adults only. Given teens’ preferences for social and streaming video over TV, we can expect these trends to continue to shift in favor of digital.”
Live sports shifting to streaming services is one reason why digital video consumption is surpassing TV. YouTube won NFL Sunday Ticket at the end of last year, taking Sunday afternoon out-of-market games away from DirecTV. MLB sold the streaming rights to Friday night games to Apple TV+ and Sunday morning games to Peacock.
Another leading driver of digital video time is social video. The report predicts average daily time spent with videos on social networks among US adults will climb 9.3% to 45.2 minutes this year. TikTok is a key driver: its total time spent among all US adults will rise 14.2% this year to 17.4 minutes.
With time spent with Facebook on the decline, TikTok will overtake it next year as the most-consumed social network among US adults, the report states.
“TikTok versus Netflix will be a major trend to watch this year,” noted Jasmine Enberg, principal analyst at Insider Intelligence. “The lines between social and entertainment have blurred, and TikTok is now coming for the bigger-screen video players.
“New TikTok users add incremental new time spent, while its efforts in longer-form video, livestreaming and, more recently, music streaming, keep users on the platform longer. Growth in time spent on Netflix, meanwhile, is stagnant.”
Twitter use among US adults is in decline — expected to drop 10.7% this year and another 13.3% in 2024 in total time spent on the platform. Insider Intelligence says that the problem is that Twitter’s efforts to encourage more original videos, from Vine to Fleets, have so far been unsuccessful.
“Twitter owner Elon Musk’s attempts to bring more video to the app, including potentially incentivizing YouTube creators to post to Twitter, will be futile at improving time spent among all US adults unless he also manages to stave off a user decline.”
Technology and societal trends are changing the internet. Concerns over data privacy, misinformation and content moderation are happening in tandem with excitement about Web3 and blockchain possibilities. Learn more about the tech and trends driving humanity’s digital future with these hand-curated articles from the NAB Amplify archives:
Going after Google and Amazon, TikTok’s influence on the music industry, publishing, fashion, and Hollywood has only just begun.
February 21, 2023
The Future of Production Amplified: Bringing Finishing to the Cloud
TL;DR
Katrina King, global strategy leader for content production at Amazon Web Services, discusses the Color in the Cloud workflow and the problems it attempts to solve.
King spearheaded the Color in the Cloud workflow, which aims to provide a truly end-to-end solution for film and television production in the cloud, and which received an HPA Engineering Excellence Award in 2022.
King explains why ingesting assets to the cloud without the ability to finish is “essentially a bridge to nowhere,” and how advancements in faster image processing and low latency can help production teams achieve their creative goals.
Remote cloud production has grown by leaps and bounds in recent years — partially driven by the COVID-19 pandemic — but workflows must include every step of the content production cycle for cloud production to truly reach maturity.
Finishing, with its need for precisely calibrated monitors and other considerations, such as security, has presented a number of challenges to cloud-based workflows. But now Amazon Web Services has introduced its new cloud-based “Color in the Cloud” workflow, which aims to provide a truly end-to-end solution for film and television production in the cloud. As part of NAB Amplify’s video series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with Katrina King, global strategy leader for content production at AWS, about the Color in the Cloud workflow and the problems it attempts to solve.
King spearheaded the AWS Color in the Cloud workflow, which received an HPA Engineering Excellence Award in 2022. She is a motion picture technologist and production technology executive with more than 20 years of experience supervising production and technology for motion pictures, television and live events, and has authored nearly a dozen patents in film and TV production, interactive streaming and cloud-based post-production.
“To be able to color grade, you need a reference monitor, which is a high-fidelity monitor that allows you to see in the true pixel without any compression, artifacting or encoding. Traditionally these activities have been done in light-controlled rooms with a lot of very expensive equipment, very specific calibrated monitors, and workflows that allow for that true original pixel to be transmitted from the original camera files through to the final pixel,” King explains.
“Traditionally, it’s been virtually impossible to connect a reference monitor to a remote instance without heavily compressing the signal, which kind of defeats the purpose of doing it in the first place,” she continues. “So what we set out to do was to develop a workflow that would allow us to be able to connect a reference monitor to a remote instance, which would unlock really high-fidelity color grading in the cloud.”
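A quick back-of-the-envelope calculation shows why a true-pixel signal is so hard to move to a remote instance without compression; the frame size, bit depth and frame rate below are illustrative choices, not figures from AWS.

```python
def uncompressed_gbps(width: int, height: int, fps: float,
                      bits_per_sample: int, samples_per_pixel: float) -> float:
    """Raw video bandwidth in gigabits per second (ignoring blanking/overhead)."""
    bits_per_frame = width * height * bits_per_sample * samples_per_pixel
    return bits_per_frame * fps / 1e9

# UHD, 10-bit, 4:2:2 (about 2 samples per pixel), 24 fps -- illustrative values
print(round(uncompressed_gbps(3840, 2160, 24, 10, 2), 2))  # ~3.98 Gbps
# versus the single-digit Mbps rates typical of consumer streaming
```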
King says AWS is excited about scaling the workflow to other applications and workloads as well. “Now that we have a viable method to decode the signal and transport the signal, really it’s just a matter of a variety of different applications integrating the Cloud Digital Interface software development kit that would allow them to leverage the same workflow with the same hardware and the technology that’s been developed,” she notes.
“We’d like to see the technology start to be moved and used in quality control, in VFX compositing, other color grading finishing applications. Really any workload that requires really high-fidelity reference monitoring, we’d love to see adopt the Color in the Cloud workflow.”
In Part 1 of our Q&A, King explains why ingesting assets to the cloud without the ability to finish is “essentially a bridge to nowhere.” She also describes the primary challenges to finishing in the cloud, the main requirements for a viable cloud-based color and finishing solution, and how the Color in the Cloud workflow enables faster image processing, low latency, and even monitor calibration.
Watch Part 1 below:
In Part 2, King talks about the biggest developments in cloud production she’s observed in recent years, how advancements in faster image processing and low latency can help production teams achieve their creative goals, and what she sees for the future. She also discusses how Company 3 supported Amazon’s new Lord of the Rings series, The Rings of Power, which was fully produced in the cloud.
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
Digital Domain’s chief technology officer Hanno Basse discusses how the development of digital humans is transforming film & TV production.
That’s How You Do It: Sam Pollard on Making “Bill Russell: Legend”
TL;DR
When former HBO Sports President Ross Greenburg approached Sam Pollard two years ago about doing a documentary on NBA legend Bill Russell, Pollard jumped at the chance.
Bill Russell: Legend premieres on Netflix February 8 and includes the last interview with Russell, an 11-time NBA champion with the Boston Celtics.
The two-part documentary, directed by Sam Pollard (MLK/FBI, Mr. Soul!), weaves interviews with archival footage and excerpts from Russell’s memoirs, to tell the basketball legend’s story.
When former HBO Sports President Ross Greenburg approached Sam Pollard two years ago about doing a documentary on NBA legend Bill Russell, Pollard jumped at the chance.
“I didn’t hesitate. I said yes, because I grew up in the ’60s,” Pollard told WNYC’s Allison Stewart. “I was very familiar with Bill Russell. I was familiar with the rivalry between Bill Russell and Wilt Chamberlain. I was excited to jump in and do this documentary.”
Bill Russell: Legend premiered on Netflix February 8 and includes the last interview with Russell, an 11-time NBA champion with the Boston Celtics. Russell died during the filmmaking process at his home in Mercer Island, Washington on July 31, 2022. He was 88.
The two-part documentary, directed by Pollard (MLK/FBI, Mr. Soul!), weaves interviews with archival footage and excerpts from Russell’s memoirs, to tell the basketball legend’s story. Corey Stoll narrates while Jeffrey Wright reads the excerpts from Russell’s memoirs.
Pollard told Clint Krute during an episode of the Film Comment podcast that one of the biggest challenges “was to say to ourselves, ‘when do we have too much basketball? When do we need to stop and go to something that he was doing off the court?’ And then when we got to his activities off the court, the question we had to ask ourselves was, ‘how long did we stay with that material before we get back to the basketball?'”
The director added that the classic narrative structure they originally had after the tease was to follow Russell’s life chronologically. But Pollard said they decided to show Russell getting drafted in 1956 by the Celtics instead “to create the drama.”
“[Y]ou [Pollard] play with a bit of the established or traditional sports documentary time structure by reversing what we would usually think was gonna happen after the tease that we would start with the origin story narrative,” scholar Samantha Sheppard said during the Film Comment podcast with Pollard and Krute. “But you move us and shift us along and then take us back to more of a familial historical narrative in that sense. And I think that in that way, in watching this, it’s like a trick. Often it does feel quite traditional. It feels even with the time change, still quite chronological at times.”
Sheppard, an associate professor of cinema and media studies in the Department of Performing and Media Arts at Cornell University, authored Sporting Blackness: Race, Embodiment, and Critical Muscle Memory on Screen, which explores sports documentaries and how they represent blackness.
“It [sports documentaries] finally gives these athletes larger context. It lets them speak, it lets them be culturally and critically framed, and it lets them, it lets us as audiences see their sport not divorced from the sociality in which they live,” said Sheppard. “So it’s not a narrative of shut up and dribble, it’s actually ‘tell us more and also show us the sport at the same time.’ So these films become really, really important as a way to provide a greater context to black athletes in ways that we have not seen them on the court, and more particularly off the court in terms of their social or cultural impact.”
Russell played for the Boston Celtics from 1956 to 1969, amassing a long list of achievements along the way: 11 NBA championships (two of them as player-coach), five NBA Most Valuable Player awards, 12 All-Star selections, two NCAA championships, and an Olympic gold medal.
“What’s interesting about Russell is from one perspective, he seems like this imposing, 6’9 center for the Boston Celtics. Winner, winner, winner, right? But there’s the other side to Bill Russell where he’s extremely thoughtful,” Pollard told Esquire’s Alex Belth. “He’s extremely nuanced about everything in life, not only as a basketball player but as a Black man in America. And he had opinions about everything.”
Off the court, Russell was deeply involved in the Civil Rights movement, attending the 1963 March on Washington with Dr. Martin Luther King and the 1967 Cleveland Summit, as well as speaking out during Boston’s school busing crisis.
“This man was a real activist,” Pollard told NECN’s Clinton Bradford. “He didn’t just want to be known as a great basketball player, which he was, he wanted to be known as a human being who was well rounded, who had other things on his mind and other issues he wanted to articulate and talk about.”
Pollard wouldn’t have been able to tell Russell’s story without the mountain of archival footage, stills, and articles dug up by archival producer Helen Russell.
“Documentary filmmaking is really being like an anthropologist,” Pollard told Film Comment’s Krute. “You’re doing a deep dive, you’re doing a tremendous amount of research. And the more research you do, the more you find gold, you really find gold.”
But because Russell played in the 1950s and 1960s, some of that footage wasn’t the greatest.
“The one challenge that we as documentarians always face is that when you see this old footage, you say, ‘It looks pretty crappy, and there wasn’t a lot of coverage,'” Pollard told Variety’s Addie Morfoot. “So, you have to sort of take a leap of faith. [We looked at the archives] and would say – ‘Is that Bill Russell?’ But we also knew that we were never going to get the same kind of coverage and quality we see today.”
Even with the at times grainy footage, Pollard still managed to weave a narrative that makes Bill Russell: Legend stand out.
“What helps set the documentary apart is that Pollard has assembled a treasure trove of vintage game footage and vintage interviews, as well as a wealth of new or new-ish interviews with Russell, [Bob] Cousy, Satch Sanders and many of their contemporaries including the aforementioned [Jerry] West, Bill Bradley, Walt Frazier and more,” wrote The Hollywood Reporter’s Dan Fienberg. “There’s a very good balance between the game footage, which accentuates Russell’s grace and athleticism, and the interviews, which concentrate on his intensity and, perhaps more than anything, his intellect.”
Russell was a student of the game, spending hours studying.
“[H]e understood that the game of basketball is just not about being physical, it’s about being mental,” Pollard told WNYC’s Stewart. “It’s about understanding how to position yourself and play against other players, where you should be, where one of your teammates should be to get the ball to take it down the court to get a basket, to know when to get a rebound and where to get the rebound and how to use that. When he was at USF with his future teammate, K.C. Jones, they came up with the strategies. That’s what they would call themselves: rocket scientists.”
Pollard added: “They were really thinking about the physics of basketball. It just goes to show you that athletes are very intelligent people, they’re not just jocks, they’re very intelligent. Bill took it to another level in terms of understanding the science and the physics of the game and how to use the game to his advantage.”
Alex Belth, in the introduction to his interview with Pollard, summed up both the documentary and Russell: “Bill Russell: Legend reminds us that in the world of team sports, the biggest team player of them all was also perhaps the most singular individualist, too.”
As the streaming wars rage on, consumers continue to be the clear winners with an abundance of series ripe for binging. See how your favorite episodics and limited series were brought to the screen with these hand-picked articles plucked from the NAB Amplify archives:
Adam McKay new docudrama for HBO, “Winning Time: The Rise of the Lakers Dynasty,” shows how the Lakers changed the way basketball is played.
“Fathead:” Virtual Production (Almost) Completely in the Cloud
TL;DR
The Entertainment Technology Center at USC (ETC@USC) has released the first part of a white paper exploring the state of the art in virtual production.
Multi-studio production experiment “Fathead” aims to push the boundaries of in-camera VFX and on-set virtual production.
The initiative is co-produced by AWS, Amazon Studios, and partners including Epic Games, Warner Bros., Universal Pictures, ARRI, Arch Platform, Blackmagic and Perforce.
One significant achievement was uploading original camera negative (OCN) to the cloud in hours — a process that would normally take days.
A multi-studio production experiment, dubbed Fathead, aims to push the boundaries of in-camera VFX and on-set virtual production. It’s co-produced by AWS, Amazon Studios, and partners including Epic Games, Warner Bros., Universal Pictures, ARRI, Arch Platform, Blackmagic and Perforce.
Another partner — also funded by Hollywood entities — is the Entertainment Technology Center at USC (ETC@USC), which has released the first part of a case study, “Fathead: Virtual Production & Beyond,” detailing production of the 20-minute short film.
“Everything on this production was done in the cloud, minus the shoot on set,” explains ETC@USC head of virtual & adaptive production Erik Weaver, executive producer of Fathead. “We did some very innovative work, ingesting ARRI Alexa RAW to Amazon S3 buckets on the AWS cloud in real time, which had never been done before and I don’t think has been done since.”
The short was shot on Amazon’s Stage 15 virtual production facility in Culver City and the team benefitted from AWS support engineers, who wrote custom scripts for the real-time cloud ingest.
“It was actually writing to Amazon faster than it was writing to our local backup drive on stage,” Weaver notes.
The paper’s first section, “Cloud Computing: Growth without Bounds,” “elaborates on the tools and processes to show how we did it,” Weaver explains. Uploading the original camera negative (OCN) to the cloud would normally take days, but the process was condensed through a combination of the AWS workflow, the technical capabilities of Stage 15, and the digital expertise of the crew.
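The custom ingest scripts written for the shoot aren’t public, but the general pattern, watching the offload folder and pushing each finished clip to S3 with concurrent multipart uploads via boto3, can be sketched as follows; the bucket name, mount point, file extension and tuning values are all hypothetical.

```python
import time
from pathlib import Path

import boto3
from boto3.s3.transfer import TransferConfig

# Hypothetical locations -- the real stage used its own bucket and mount points.
WATCH_DIR = Path("/mnt/ocn_offload")
BUCKET = "example-fathead-ocn"

s3 = boto3.client("s3")
# Big multipart chunks and high concurrency help saturate the stage's uplink.
transfer_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=16,
)

uploaded: set[Path] = set()
while True:
    # A real ingest tool would also verify each clip is fully written and
    # checksum it; this loop just shows the watch-and-upload shape.
    for clip in sorted(WATCH_DIR.glob("**/*.mxf")):  # extension is an assumption
        if clip in uploaded:
            continue
        key = str(clip.relative_to(WATCH_DIR))
        s3.upload_file(str(clip), BUCKET, key, Config=transfer_config)
        uploaded.add(clip)
        print(f"uploaded s3://{BUCKET}/{key}")
    time.sleep(5)  # poll the offload folder every few seconds
```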
“The idea was to use a short film as a paradigm for production processes of the future,” Weaver said.
Some 350 people worked on the project, which has received an NAACP Image Award nomination. The cloud-based AWS workflow employed by the Fathead team allowed for usage-based pricing, avoiding the need for large upfront infrastructure investments.
“We used cloud computing as a model for on-demand access to a configurable pool of online resources during the lifecycle of a film,” said Weaver.
Written and directed by Craig Patterson, the film is set in a junkyard with elaborate backgrounds that would have been costly to physically build, not to mention dangerous for the young actors involved. That made the project an ideal case study for a volume stage in which the environments were all built digitally, by teams in Greece, New York and Los Angeles.
Perforce Software’s Helix Core version control allowed artists in the different locations to work on the same scene simultaneously via the cloud. Arch Technologies built the virtual machine that allowed the various tools to interoperate seamlessly and safely in the cloud.
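The production’s exact Perforce configuration isn’t described in the paper, but Helix Core’s check-out/submit model for a remote artist can be sketched with the P4Python client; the server address, workspace and depot path below are hypothetical.

```python
from P4 import P4, P4Exception

p4 = P4()
p4.port = "ssl:helix.example.com:1666"   # hypothetical cloud-hosted Helix Core server
p4.user = "env_artist"
p4.client = "env_artist_ws"              # workspace mapped to the project depot

try:
    p4.connect()
    p4.run("sync")                                   # pull teammates' latest work
    p4.run("edit", "//fathead/env/junkyard.uasset")  # check out a shared asset
    # ... edit the asset locally in the DCC tool or Unreal Editor ...
    change = p4.fetch_change()
    change._description = "Adjust junkyard set dressing"
    p4.run_submit(change)                            # push the change back to the depot
except P4Exception:
    for error in p4.errors:
        print(error)
finally:
    if p4.connected():
        p4.disconnect()
```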
Section one covers the cloud-first aspect of Fathead; further sections of the white paper, to be released shortly, deal with reducing echo in a volume and examine the current state of virtual production.
A.J. Wedding from Orbital Virtual Studios discusses how virtual production is transforming film & television production.
South Korea’s Synthetic Pop Stars: What Do “They” Mean for the Metaverse?
TL;DR
South Korea is the world’s testing ground for tech, so when K-pop singers compete in a virtual universe what does this tell us about the future of entertainment?
The popularity of synthetic pop stars in South Korea may be peculiar to that culture. Or is it?
Could the merger of virtual with the real create a new genre of content?
With its highly digital and device-literate, young and ultra-competitive society, South Korea is looked on as the world’s petri dish for future media. The current vogue for K-pop stars who use avatars, and the popularity of entirely virtual singers and influencers, means the country is one to watch.
South Korean tech company Kakao Entertainment, for example, is billing Mave, its artificial band, as the first K-pop group created entirely within the metaverse. It is using machine learning, deepfake, face-swap and 3D production technology.
To give them global appeal, the company wants the “girls” of Mave to eventually be able to converse in, say, Portuguese with a Brazilian fan and Mandarin with someone in Taiwan, fluently and convincingly.
The idea, the project’s technical director, Kang Sung-ku, tells Jin Yu Young and Matt Stevens at The New York Times, is that once such virtual beings can simulate meaningful conversations, “no real human will ever be lonely.”
Kakao also runs the virtual world called “Weverse” or simply “W.” In part of this world there’s a game show called Girl’s Re:verse that features 30 singers, eliminated over time, until the last five standing form a band. All are members of established K-pop bands or solo artists. But they are all masked as animated avatars.
Strictly speaking, this is not a metaverse, says the NYT. These are instead proprietary platforms users have to log in to, accepting terms of service, with no sign of any cryptographic features.
But the complete blurring of the virtual with the real is surely one core trait of what will become the metaverse.
Another example is a TV reality show, not dissimilar to The Masked Singer, but with a difference. Avatar Singer is a 15-episode music competition that ran live on Korean TV channel MBN. It features celebrity competitors masked as digital 3D avatars complete with superpowers.
As explained by one of the vendors behind the project, the show used live motion capture, facial capture, Unreal Engine and augmented reality. These enabled the team to “expand the conventional stage into an evolved universe.”
Compared with their Korean counterparts, media companies in the United States have only engaged in “light experimentation” with the metaverse so far, Andrew Wallenstein, president and chief media analyst of Variety Intelligence Platform, tells The New York Times.
Countries like South Korea “are often looked at like a test bed for how the future is going to pan out,” Wallenstein said. “If any trend is going to move from overseas to the US, I would put South Korea at the front of the line in terms of who is likeliest to be that springboard.”
Already Korean “virtual influencers” like Rozy have Instagram followings in six figures and promote real brands like Chevrolet and Gucci.
“We want to create a new genre of content,” said Baik Seung-yup, Rozy’s creator, who estimates that about 70% of the world’s virtual influencers are Korean.
“From a Western perspective, it can seem strange,” Enrique Dans writes in a blog post on Medium. “The [virtual pop] groups all look pretty similar (the manga-style avatars have huge eyes and heart-shaped faces), and are deeply rooted in the cultural codes of the country’s youth.”
He adds, “Young Koreans follow their favorite bands, attend concerts, and celebrate their bands’ rise to popularity as a reflection of their competitive society, where they must gain access to certain schools and universities if they want to find a good job.”
Son Su-jung, a producer for the show, also says that part of the point was to give K-pop singers — “idols,” as they are called — a break from the industry’s relentless beauty standards, letting them be judged by their talent, not their looks.
“Idols in the real world are expected to be a product of perfection, but we hope that through this show, they can let go of those pressures,” she said.
The metaverse may be a wild frontier, but here at NAB Amplify we’ve got you covered! Hand-selected from our archives, here are some of the essential insights you’ll need to expand your knowledge base and confidently explore the new horizons ahead:
Synthetic media, sometimes referred to as “deepfake” technology, is already impacting the creative process for artists (and non-artists).
Viewing the Metaverse as a Global Village
TL;DR
The Global Collaboration Village is described as the first global, purpose-driven metaverse platform. The World Economic Forum has launched it in prototype in partnership with Accenture and Microsoft.
A panel at Davos pitted decentralized Web3 disruptor Neal Stephenson against a senior Meta executive.
While it’s good that the WEF recognizes the importance of standards and universal access with which to frame a successful next-generation internet, very little in terms of concrete action has resulted to date.
Just as the arrival of the internet was heralded as a “global village” that would unite people in a universal exchange of information, there is now an effort to turn the next-generation internet into the “Global Collaboration Village.”
That’s because the vision of a connected global village has gone sour, exacerbating polarized views and leaving many without broadband cut off from participation.
While it is far from being built, there’s optimism that the metaverse can be constructed on sturdier, open-to-all foundations.
At Davos, the glamourized meeting of global financiers, the World Economic Forum talked about establishing the Global Collaboration Village, described as “the first global, purpose-driven metaverse platform.”
Klaus Schwab, founder and executive chairman of the World Economic Forum, described the initiative in an article on the organization’s website. “Created to enhance more sustained public-private cooperation and spur action to drive impact at scale, this global village will not replace the need to meet face-to-face but will instead supplement and extend our ability to connect regardless of where we are physically located around the world,” he said.
“Business executives, government officials and civil society leaders must come together to define and build an economically viable, interoperable, safe, equitable and inclusive metaverse.”
A prototype of the Global Collaboration Village was launched at Davos in partnership with Accenture and Microsoft, with the support of “leading global corporations, governments, international organizations, academic institutions and NGOs.”
“To create mass adoption, the metaverse must show that it is not just a replacement for what we already do but that it enables us to do things in new and more effective ways,” declared Schwab.
For example, people will be able to “dive in” to an interactive ocean experience that reveals the importance of safeguarding the ocean through collective action. In other words, instead of telling us how important mangroves are for coastal ecosystems, this global Metaversian vision “invites us to witness and experience the power of restoration and conservation for ourselves — all while engaging with global experts and innovators who are on the physical frontlines of this work.”
The WEF and supporters of this project may mean well, but this appears on the surface to be little more than a marketing stunt with no actual concrete plan to lead development of an open, interoperable internet.
Perhaps that’s understandable given that its paymasters at Davos are Big Tech, including Microsoft and Meta, which have a vested interest in shaping the internet to their own ends.
A bit more meat on these bare bones was provided by a panel discussion at the event featuring representatives from Meta pitched against the author and technologist Neal Stephenson.
Stephenson is building a blockchain intended to help individual creators make more money from the future internet than they currently do in an online landscape dominated by monopolies like Meta, Apple, Microsoft and Amazon.
“What we’re trying to do with the Lamina 1 project is to build a blockchain that is optimized specifically for creators,” says the futurist. “These are the people whose talents are going to be needed to actually make a metaverse that people are going to want to visit. It’s a kind of pure engineering project at this point.”
A core tenet of the metaverse is for people, as avatars, to be able to move friction-free from one experience to another, without having to continually log in and log out.
That means personal data in the form of identity (plus payment mechanisms and virtual assets) needs to be shareable and also secure. It is one of the basic frameworks that the WEF’s Collaboration forum needs to discuss.
For Stephenson there’s no doubt that identity needs to be distributed and decentralized “if the metaverse is actually going to work.”
Christopher Cox, chief product officer at Meta Platforms, agreed with the overall vision, but didn’t quite concede that user identity needs to be decentralized (and outside of Meta’s walled garden).
“We view the feeling of presence as being the essential ingredient for the user experience, of something that feels metaverse-like,” he said. “I think the Internet’s a pretty good way to think about the metaverse because some parts of the Internet are very coherent with each other. If you’re inside of Wikipedia, if you’re inside Instagram, you know, these are experiences that are self-consistent, that have a single designer, that have a single server, that have a single privacy and identity model where you understand the rules. Those systems are interlinked,” he said.
“So you can move from Instagram easily to Google Maps. You’re not confused how you got there. The hyperlink was the thing that got you there. And I think part of what doesn’t exist yet for the metaverse is what is the hyperlink? What is the model of travel from sort of one set of experiences for the other?”
Stephenson conceded that both the decentralized, bottom-up approach to building the metaverse and the centralized, top-down approach have their advantages, but he came down on the side of Web3.
“[The metaverse] doesn’t happen unless you create an open system that’s kind of analogous to the early Web or the early Internet where anyone who’s interested can latch on to a shared protocol and begin to build what they want to build in that world.”
However, invited by the moderator to challenge Meta on the topic, Stephenson declined.
For his part, Cox largely swerved the answer, but toed the party line:
“One thing about the development of Facebook and Instagram is a lot of it is focused on giving tools to creators and tools to businesses. The creative tools that we give them is a lot of what makes the experience unique, along with some set of assurances around safety and privacy, which is where the centrality can offer a big benefit to the user.”
The digital divide is a conundrum for any kind of internet universality — in poorer or rural parts of the US as much as Rwanda, a country represented by a government minister on the panel.
Most people imagine the metaverse, and its experience of immersivity and presence, will be accessed not via a 2D flat screen but in 3D via AR or VR hardware.
“We believe that one day that computing platform will be as important as the smartphone has become in our lifetimes,” said Cox. “We’re working on a lot of the early R&D to bring that to life.”
He explained that Meta had spent the eight years since acquiring Oculus working to deliver a VR product line that is affordable enough, usable enough and impressive enough to be used in social experiences.
“We’re working on augmented reality, which is a much further out version of the future where you would wear, you know, a nice pair of glasses. It would be light, it would be comfortable, it would have waveguides that would allow you to see screens in front of you.”
He also said Meta was building software to support a developer ecosystem that includes filmmakers who are starting to make 3D content.
“We’re really just trying to start to seed the ecosystem of content and experiences for VR.”
For all the talk about the metaverse as a driver for progress, and the rhetoric behind the WEF’s Global Collaboration Village, it was clear that, in truth, not much progress had been made, nor would there be while those controlling the divided internet of today want to control it tomorrow.
Stephenson appeared exasperated too. In response to a question about how the metaverse can be engineered, he said:
“In order for everyone to not die, we have to remove carbon from the atmosphere on a scale that is completely mind boggling, even to people who consider themselves really well informed about this issue. And that’s going to be the biggest engineering project in the history of the world.”
The metaverse may be a wild frontier, but here at NAB Amplify we’ve got you covered! Hand-selected from our archives, here are some of the essential insights you’ll need to expand your knowledge base and confidently explore the new horizons ahead:
The ‘70s-Inspired Visuals of Benjamin Caron’s “Sharper”
TL;DR
For his debut feature “Sharper,” director Benjamin Caron wanted cinematographer Charlotte Bruus Christensen to be the “Princess of Darkness” in homage to cinematographer Gordon Willis.
Willis famously shot “The Godfather,” “Klute” and other movies in next to no light; in the case of “The Godfather” that creative choice was driven by Marlon Brando’s makeup.
“Sharper” is a grifter movie that revels in the use of shadows and underexposed long takes.
Prior to “Sharper,” Caron had notable success directing episodes of “The Crown” and Disney’s Star Wars episodic “Andor.”
Not knowing what will happen is the ultimate tease for a grifter movie like Sharper — the darkness just adds to the mystery
The British director of Sharper, now streaming on Apple TV+, wanted his DP Charlotte Bruus Christensen to become the “Princess of Darkness” in homage to cinematographer Gordon Willis, who famously shot The Godfather and other movies in next to no light.
Rather obviously, Vanity Fair’s Richard Lawson takes a romantic view of using film, unkindly describing the digital alternative’s look as “the plastic dullness of a toss-off digital Netflix thriller.” With Bruus Christensen’s film aesthetic, however, he warmly welcomed “the grain and light of what movies used to look like.”
In truth, Willis’ approach to lighting — particularly in the initial scene of The Godfather — occurred to him only at the last minute as a means to counter the strange makeup Marlon Brando was using. Just 20 minutes prior to the shoot, the only technique he could think of was to use a top light. Ultimately, this decision sealed the look of the movie from that point on. But maybe the die was already cast with his moody aesthetic for Klute, which he shot the year before, in 1971.
But Lawson’s coupling of the use of film with an old-fashioned con artist tale is understandable, clumsy as it might be, as Sharper is a thriller that revels in the use of shadows and underexposed long takes.
The director, Benjamin Caron, was new to feature films but had notable success in directing episodes of The Crown and Disney’s brilliant Star Wars episodic Andor. But for Sharper, he had asked Bruus Christensen “to think about these sophisticated compositions of using light and darkness,” as he told SlashFilm’s Ben Pearson. “But probably one of the biggest reference points for me was Klute. There was just something about the atmosphere of that film that I’ve always loved.”
Describing Willis’ work, Caron says, “He just basically infused every frame with meaning and atmosphere, and there was a beautiful delicacy to it. So it was a heavy leaning into the feeling of that film.” (As an aside, this 1971 film has been having a hell of a cultural resurgence as of late, BJ Colangelo notes at SlashFilm, with director Matt Reeves also citing the film as a massive influence on The Batman.)
Caron also referenced The Color of Money, Drive and especially Fincher’s Seven. “What I loved about that film is that you were so claustrophobic for such a long period of time. You were held in that city. It was all mainly shot at night and it was rain, but then right at the very end of the film, you suddenly had this big desert expanse where there was nothing else.”
He could see that same scenario working for Sharper, he told Pearson. “We had all these characters penned into Manhattan, where the sight lines are limited and you can rarely see the horizon. But then, as in Seven, I love the end where suddenly you’re in this open space where you can see nothing but sky, and ultimately the characters have nowhere to hide.”
Apple’s own description of Sharper does harken back to thrillers of the past: “No one is who they seem. A neo-noir thriller of secrets and lies, set amongst New York City’s bedrooms, barrooms and boardrooms. Characters compete for riches and power in a high stakes game of ambition, greed, lust and jealousy that will keep audiences guessing until the final moment.”
Pete Hammond’s review of the film for Deadline describes the pull of this new swindler story. “Seeing the nifty grifter drama Sharper reminded me how rarely we encounter this kind of clever cat-and-mouse game that might fall into the noirish genre but really relies on diving into a world filled with characters who reveal slices of their lives that keep changing moment to moment,” he writes.
“It is the kind of movie I find enormously difficult to review because its ultimate success for a viewer is just watching it unfold, beat by beat, never quite knowing exactly where it is heading but still glued to the screen to find out,” Hammond continues.
“Written in a non-linear style and separated by chapters identified on the screen with character’s names, the focus keeps changing as we see events unfold, and eventually intertwine, as the story takes twists and turns and then twists right back again.”
Images: Julianne Moore, Sebastian Stan, John Lithgow, Briana Middleton and Justice Smith in “Sharper.” Cr: Apple TV+
But it is director Caron, in his first feature, who kept the lid on what the characters were thinking, not wanting to clue the audience into the deceit. “Deception is definitely the defining feature of this film, and I’m always interested in character’s motivations and how people talk or flirt or lie or impersonate in terms of getting what they want,” he said to Pearson.
“I thought it was really important in this film that we never had a nod and a wink to the audience at any moment that something was about to happen. Sometimes I think there’s a tendency, whether it be from the storyteller or even from the performer, to show too much.
“And I think right from the very beginning, even in conversations with the actors, we wanted to hold all of that back. Because I really remember reading the script and I really remember those moments where I was floored and I was genuinely shocked and surprised. So it was really important they held onto that integrity.”
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Cinematographer Felix Wiedemann uses the ARRI Alexa LF to create a naturalistic look for Netflix’s hit psychological thriller series.
The Future of Production Amplified: Bringing Digital Humans into the Spotlight
TL;DR
Hanno Basse, chief technology officer at Digital Domain, discusses the use of digital humans in film and television production.
Charged with guiding Digital Domain’s technical infrastructure, Basse oversees the ongoing development of the studio’s Digital Human Group and its AI-powered tools.
Celebrating its 30th anniversary in 2023, Digital Domain is at the forefront of many of the groundbreaking technologies employed to create believable digital characters capable of delivering emotionally nuanced performances.
In just the last year, the VFX house has created digital humans for Marvel features “Black Panther: Wakanda Forever” and “Doctor Strange in the Multiverse of Madness,” as well as Disney+ series “She-Hulk: Attorney at Law.”
As part of NAB Amplify’s video series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with Hanno Basse from Digital Domain about how the use of digital humans is transforming film and television production.
Basse is chief technology officer at Digital Domain, where he oversees the ongoing development of the studio’s digital human and autonomous human technologies, along with its AI-powered tools. With more than 30 years in the industry, serving as CTO at both Microsoft Azure and 20th Century Fox Film, Basse is charged with guiding Digital Domain’s technical infrastructure, including the pipeline and proprietary software development that helps its artists deliver award-quality work for any medium.
Celebrating its 30th anniversary this year, Digital Domain has been at the forefront of many of the groundbreaking technologies employed to create believable digital characters capable of delivering emotionally nuanced performances. In just the last year, the VFX house has created digital humans for Marvel features Black Panther: Wakanda Forever and Doctor Strange in the Multiverse of Madness, as well as Disney+ series She-Hulk: Attorney at Law.
The studio’s Digital Human Group has been exploring digital puppetry since its groundbreaking work on 2008’s The Curious Case of Benjamin Button. “That’s where the journey for digital humans actually really started at Digital Domain,” Basse recounts, describing the slow, painstaking process for aging Brad Pitt in reverse during post-production. “The technology has developed since then to such a degree that we can actually do this in real time now. And so we created the Digital Human Group.”
Advancements in game engine technology have completely transformed film and TV production, Basse acknowledges. “What I’m really excited about with real-time technology is that it really allows decision making in the moment,” he says. “It really allows directors and visual effects artists, and actually even actors, to iterate and try different things in real time. And they see the result of that immediately.”
In Part 1 of the conversation, Basse explains how Digital Domain’s Digital Human Group was established alongside Project Digi Doug, a comprehensive digital database of a real person: DD’s head of software R&D, Doug Roble. Basse also discusses how digital humans can be employed for virtual production, and the level of nuance that can be captured from an actor’s performance and applied to digital characters.
Watch Part 1 below:
In Part 2, Basse talks about Digital Domain’s use of digital humans in Marvel features Black Panther: Wakanda Forever and Doctor Strange in the Multiverse of Madness. He also addresses the use of AI and machine learning in the creation of digital humans, and how these tools can automate mundane tasks, freeing artists to unleash their creativity.
Watch Part 2 below:
In Part 3, Basse identifies the biggest changes he foresees for the visual effects industry and how studios can adapt to them, including the impact of major advancements in real-time rendering software and hardware, and the ability to capture final imagery with in-camera VFX. He also discusses the use of digital humans for visualization, clothing simulation, and the studio’s creation of a digital Dr. Martin Luther King, Jr. for the cover of Time magazine.
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
A.J. Wedding from Orbital Virtual Studios discusses how virtual production is transforming film & television production.
How Synthetic Media Is Completely Changing … All Media
TL;DR
Synthetic media is already changing the creative process as artists (and non-artists) utilize AI for production assistance.
Generative AI can now create text and videos, and even clone people’s voices, but some experts fear it could contribute to the spread of misinformation and fake news.
The next evolution is data-driven synthetic media, created in near-real time and surfaced in place of traditional media, which could see hyper-personalized ads created and delivered to your phone within seconds.
Synthetic media, sometimes referred to as “deepfake” technology, is already changing the creative process as artists (and non-artists) conscript AI for production assistance. From videos to books and customizable stock images and even cloned voice recordings, we can now create an infinite amount of content in seconds with the latest generative AI technology.
Israeli firm Bria helps users to make their own customizable images, in which everything from a person’s ethnicity to their expression can be easily modified. It recently partnered with stock photo image giant Getty Images.
“The technology can help anyone to find the right image and then modify it: to replace the object, presenter, background, and even elements like branding and copy. It can generate [images] from scratch or modify existing visuals,” Yair Adato, co-founder and CEO at Bria, told Maya Margit at The Media Line.
Another Israeli startup, D-ID, enables users to make videos from still images with its Creative Reality Studio. It is working with clients including Warner Bros.
“With generative AI we’re on the brink of a revolution,” Matthew Kershaw, VP of commercial strategy at D-ID, tells Margit. “It’s going to turn us all into creators. Suddenly instead of needing craft skills, needing to know how to do video editing or illustration, you’ll be able to access those things and actually it’s going to democratize that creativity.”
Kershaw believes that soon, people will even be able to produce feature films at home with the help of generative AI.
In fact, analyst Gartner has put a timeline of just seven years on that. It expects that by 2030 a major blockbuster film will be released with 90% of the film generated by AI (from text to video).
AI in Marketing
The rise of synthetic media has made it easier than ever to produce deepfake audio and video. Microsoft researchers, for instance, recently announced that their new AI-based application can clone a person’s voice with just seconds of training. Called VALL-E, the app simulates a person’s speech and acoustic environment after listening to a three-second recording.
Such generative models are potentially valuable across a number of business functions, but marketing applications are perhaps the most common.
DALL-E 2 and other image generation tools are already being used for advertising. Heinz, Nestle and Mattel are among the brands using the technology to generate images for marketing.
Ogilvy Paris, for instance, developed a campaign for Nestle brand La Laitière using generative AI (and drawing inspiration from Vermeer’s painting “The Milkmaid”).
In fact, the global generative AI market is expected to reach $109.37 billion by 2030, according to a report by Grand View Research.
Jasper, for example, a marketing-focused version of GPT-3, can produce blogs, social media posts, web copy, sales emails, ads, and other types of customer-facing content.
At the cloud computing company VMware, for example, writers use Jasper as they generate original content for marketing, from email to product campaigns to social media copy. Rosa Lear, director of product-led growth, tells Thomas Davenport at Harvard Business Review how Jasper helped the company ramp up its content strategy, giving writers more time for research, ideation and strategy.
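Jasper’s own interface isn’t reproduced here, but the pattern behind such tools is straightforward to sketch with the OpenAI Python client standing in for a GPT-3-style backend; the model name and prompt below are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_marketing_copy(product: str, audience: str, channel: str) -> str:
    """Generate first-draft marketing copy for a human writer to refine."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a marketing copywriter. Keep copy concise and on-brand."},
            {"role": "user",
             "content": f"Write a three-sentence {channel} post introducing {product} "
                        f"to {audience}."},
        ],
    )
    return response.choices[0].message.content

print(draft_marketing_copy("a cloud backup service", "small-business owners", "LinkedIn"))
```

In practice the output is only a starting point; the prompting and editing around calls like this is where most of the effort still sits.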
Kris Ruby, owner of a PR agency, also tells Davenport that her company is now using both text and image generation from generative models. When she uses the tools, she says, “The AI is 10%, I am 90%” because there is so much prompting, editing, and iteration involved.
She says she feels that these tools make one’s writing better and more complete for search engine discovery, and that image generation tools may replace the market for stock photos and lead to a renaissance of creative work.
By 2025, Gartner predicts that 30% of outbound marketing messages from large organizations will be synthetically generated, up from less than 2% in 2022.
The next evolution is being dubbed Generative Synthetic Media (GSM). This is defined by Shelly Palmer, professor of advanced media at the Newhouse School of Public Communications, as data-driven synthetic media — created in near real time and surfaced in place of traditional media.
This could happen very soon following “an explosion” of applications built over large language models (LLM) such as GPT-3, BLOOM, GLaM, Gopher, Megatron-Turing NLG, Chinchilla, and LaMDA.
Again, it is in marketing and advertising where the biggest impact will be felt first. This would range from AI-driven data collection to target specific audiences to the creation of tailored ads all in near-real time.
Palmer envisions a natural language generation (NLG) platform generating a script, hyper-personalizing the content (rather than targeting larger audience segments), automatically optimizing it for social media, email or display ads, and then continuously monitoring performance to ensure the content remains relevant and effective.
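As a purely illustrative sketch of Palmer’s loop (not a description of any shipping product), the per-viewer assembly step might reduce to composing a prompt from first-party data and routing it to the viewer’s preferred channel:

```python
from dataclasses import dataclass

@dataclass
class ViewerProfile:
    name: str
    interests: list[str]
    preferred_channel: str   # e.g. "email", "social" or "display"
    recent_purchase: str

def build_ad_prompt(profile: ViewerProfile, product: str) -> str:
    """Compose the NLG prompt for one viewer. A generative model would turn this
    into finished copy, and performance data would feed back into the next pass."""
    return (
        f"Write a short {profile.preferred_channel} ad for {product}. "
        f"The reader is {profile.name}, who recently bought {profile.recent_purchase} "
        f"and cares about {', '.join(profile.interests)}. "
        f"Keep it to two sentences."
    )

viewer = ViewerProfile("Sam", ["trail running", "sustainability"], "email", "running shoes")
print(build_ad_prompt(viewer, "a GPS sports watch"))
```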
“It will not be long until the ideas described in this article are simply table stakes,” Palmer declares. “This will start with generative AI creating ad copy and still images (GPT-3 and Midjourney APIs), and then we’ll start to hear voice-overs and music. Next we’ll start to see on-the-fly deepfake videos, and ultimately, all production elements will be fully automated, created in near real time, and delivered.”
He thinks this will take “more than a year, less than three years.”
As it stands today, to use generative AI effectively you still need human involvement at both the beginning and the end of the process. As a rule, creative prompts yield creative outputs. The field has already led to a prompt marketplace where one can buy other users’ prompts for a small fee.
Davenport thinks that “Prompt Engineer” is likely to become an established profession, at least until the next generation of even smarter AI emerges.
Deepfake Concerns
Synthetic media is also sometimes referred to as “deepfake” technology, which brings with it more worrying connotations. These concerns range from identification of authorship and copyright infringement to the ethically muddy waters of fake news and training AIs on datasets biased in terms of race or gender.
LLMs, for example, are increasingly being used at the core of conversational AI or chatbots. Even Facebook’s BlenderBot and Google’s BERT “are trained on past human content and have a tendency to replicate any racist, sexist, or biased language to which they were exposed in training,” reports Davenport. “Although the companies that created these systems are working on filtering out hate speech, they have not yet been fully successful.”
An Australian court ruled in 2021 in favor of AI inventorship (i.e., the AI system could be named as the inventor on a patent application). However, this was later overturned on appeal by the Full Court of the Federal Court. Law firm Dentons expects to see lots of developments and change globally on this issue.
In the US, the White House Office of Science & Technology Policy has published a blueprint for an AI Bill of Rights, which proposes national standards regarding personal data collected by companies and AI decision-making. According to Dentons, AI decision-making is likely to see continued regulatory focus from the federal government following that publication.
The EU is already going further and is seeking to benchmark restrictions on the use of AI just as it did successfully with GDPR in its jurisdiction. Expected to be finalized in 2023, the EU AI Act will categorize AI systems as unacceptable, high, or low/minimal risk.
As explained by Dentons, unacceptable-risk AI systems include, for example, subliminal, manipulative or exploitative systems that cause harm.
The law firm says, “We look forward to 2023 being a fruitful year in terms of the increase in scope for AI deployment, and also inevitable regulation, with the possible exponential increase in legal disputes relating to AI.”
Interested in how artificial intelligence will impact technology, business, and creativity? (How can you not be?) Ride the wave into the future of Media & Entertainment, where curiosity meets innovation meets storytelling, with NAB Amplify’s dedicated resource exploring the transformative force of AI. Dive into explainers that demystify complex concepts, discover real-world applications in Hollywood, and glimpse the road ahead. Aimed at industry professionals working at all stages of the content lifecycle, these resources are your gateway to understanding how AI is reshaping the entertainment landscape. Join us, and let’s Amplify the conversation!
Artificial intelligence can help companies manage the increasing amount of video being shared, particularly among mobile-natives.
The Future of Production Amplified: Pushing the Limits of Virtual Production
TL;DR
A.J. Wedding, director and co-founder of Orbital Virtual Studios, discusses how virtual production is transforming film & television production.
Wedding served as virtual production supervisor on season 5 of FX series “Snowfall,” which saved nearly $50,000 per episode by reducing shooting time, transportation between locations, and crew.
Virtual production supervisors can help guide the virtual production process, and the ideal time to bring one on is as early in the pre-production phase as possible.
As part of NAB Amplify’s video series, “The Future of Production Amplified,” NAB Amplify content partner Jennifer Wolfe chats with A.J. Wedding from Orbital Virtual Studios about how virtual production is transforming film & television production.
Wedding is director and co-founder of Orbital Virtual Studios, a virtual production studio based in the heart of Los Angeles, and the virtual production supervisor for FX series Snowfall. As the former head of production for Raleigh Studios, he has produced and directed countless commercials and ad campaigns as well as live broadcasts. He also hosts the industry-focused podcast, “The Callsheet.”
Snowfall made waves last year for its use of LED walls to create real-time backdrops of Los Angeles for Season 5, saving nearly $50,000 per episode by reducing shooting time, transportation between locations, and the number of crew.
In Part 1 of the chat Wedding discusses the mechanics of virtual production and the core technologies that power it, as well as what DPs have to say about VP in terms of lighting, stationary LED volumes vs. mobile workflows, and what he sees for the future.
Watch Part 1 below:
In Part 2, Wedding talks about the essential things production teams need to understand about using virtual production before diving in, including options for various budgets and the advantages it offers, as well as the challenges virtual production presents for DPs, and some of the biggest advancements from over the past year.
Watch Part 2 below:
In Part 3, Wedding shares what he’s learned from his experience as virtual production supervisor for FX series Snowfall, including defining the role of the VP supervisor, the ideal time to bring a VP supervisor on board — hint: as early as possible — and the cost-saving benefits virtual production can offer. He also talks about capturing backplates, and the mechanics of playback on set.
Take a peek into The Future of Production Amplified with NAB Amplify’s series featuring top creatives and other M&E professionals helping to shape the future of film and television production. Gain insights into the latest trends in virtual production, cloud-based workflows, real-time graphics, live production, digital humans and other cutting-edge technologies as we chat with industry experts from AWS, Epic Games, Digital Domain, and more!
Epic Games industry manager Miles Perkins discusses how real-time graphics are revolutionizing production for film and television.
Web3 Amplified: Be a Fast Follower with Ira Rubenstein
TL;DR
Web3 is defined/enabled by next-gen technologies, including the cloud, blockchain, AR/VR.
Content is still the key differentiator. But how and where you distribute and market that content is also key to survival.
Community has always been key to the online experience. Virtual reality is now beginning to enable the next iteration of how we interact with each other in Web3.
Don’t discount gaming. It’s a direct competitor to video because there are only so many hours in a day.
In this episode of Web3 Amplified (watch ⏯️ or listen 🎧 above), Lori H. Schwartz chats with Ira Rubenstein about content distribution and public media’s competition in a Web3 world. They also discuss how AI tools like ChatGPT will affect the future of content creation.
Rubenstein is chief digital and marketing officer at PBS. He focuses on continuing PBS’ digital evolution through the development, implementation and scaling of digital services and marketing content strategies. He also oversees the business intelligence group and leads comprehensive marketing programs to acquire, retain, and engage audiences across platforms for member stations.
Prior to his work at PBS, Rubenstein headed up mobile media company MeeMee Media and also had a stint as EVP of digital marketing for 20th Century Fox.
Following, Fast, Into Web3
Rubenstein describes Web3 as “a collection of next-gen tools.”
Specifically, he calls out: “Really, fully embracing the cloud. I think Web3 is fully embracing data and leveraging data. I think Web3 is embracing blockchain. I’m going to stay away from crypto, but blockchain technology, which encompasses, of course, lots of things. And then I think Web3 also is about VRML, as well as augmented reality.”
The nature of public media requires Rubenstein and his team to be good stewards of “limited resources.” However, he keeps an eye on trends and aims to have PBS be “a fast follower,” even if his team can’t “always have the resources to try and fully innovate at the forefront.”
Whenever something new comes down the pike, Rubenstein says he considers, “How would I use these tools, these platforms — Web3 — as a distribution platform to get our existing content out, as well as create new content that is PBS-like for these platforms?”
And in his role as CMO, Rubenstein contemplates, “How do I leverage these four new tools to help drive the message of the type of content we have at PBS and the type of content our local member stations produce?”
Going back to being a fast follower, Rubenstein knows he, and many others, are tasked with evaluating up-and-coming technology and then implementing it according to his organization’s needs and goals.
Testing the Web3 Waters: Crypto and VR
For PBS, ventures into cryptocurrency and virtual reality were both driven by consumer demand.
Because member stations really are viewer-funded, individual stations had to determine a policy around crypto donations. Rubenstein’s team was tasked with the “how” part, and he proffered different, pre-existing wallet options, rather than building out a specialized (and expensive) “crypto donation platform.” Cost-effective and flexible.
Perhaps a more surprising example comes in the form of the PBS Short Film Festival. Films are sourced variously, including from member stations. Rubenstein championed a VRML platform as a means to showcase VR content as well as reach viewers who are already occupying the VRML world.
“It’s in a virtual world,” Rubenstein emphasizes. “How do you bring content? How do you bring marketing messages that resonate into that community for what they want to engage with and talk about, and hopefully, consume?” The film fest is one answer.
Connecting in Web3
Rubenstein advises those who want to understand the future of technology to embrace it via play and experimentation.
“I like playing around with technology,” Rubenstein says. But more than that, he says he’s “always been able to just look at it and see and play with it and then think about, ‘Okay, how is this going to help me in doing what we need to do in distribution? How is this going to help us in doing what we need to do in marketing?’”
Rubenstein says that same approach is useful for other roles: “Come at these technologies thinking about ‘How can I use it to further my goals in what I have to do every day?’ And if you do that, I think you’ll think of creative ways that you might want to try something here and there, without going full on in and tweaking the whole boat basically.”
At PBS, Rubenstein can deploy his small but mighty R&D team to experiment. That may not be a luxury everyone can afford.
Additionally, if you’re not quite as comfortable with emerging tech as Rubenstein is, you have a third alternative: turn to the (dreaded) Gen Z cohort for inspiration or assistance.
Rubenstein says, “My advice, whether you’re a seasoned executive or an up-and-coming [player], is to actually play around with it. And if you’re lucky to have children, ask them and they will show you” what you can do with technology.
To put a finer point on it, Rubenstein has a 22-year-old son, who he says is “spending an awful lot of time in the VR world these days.” For his age group, that means VRML social clubs and online hangouts. Observing this behavior, Rubenstein is reminded that “communities in the internet are not new. Going all the way back to my pre-web days, AOL, CompuServe – we were serving communities of interest.”
Some Things Are New in Web3, Though
The desire for connections may be a constant throughout the ages, but artificial intelligence’s advanced capabilities are a truly new element of Web3.
Rubenstein notes that AI face-swapping apps, AI-generated audio clips, video dupes, and more have been around for some time.
But consumer-facing tools are increasingly convincing and effective. AI image generator DALL•E and chatbot ChatGPT both caused quite a stir in 2022, prompting creatives and knowledge workers to consider the security of their livelihoods.
Rubenstein says, “Someone even told me, ‘I don’t think I need my lawyer to write terms of service anymore.’” (Neither PBS nor NAB Amplify recommends eliminating your legal department.)
“It’s very powerful, but I still think making content is hard,” Rubenstein acknowledges. “And I think there’s still going to be a human element.”
He’s also a bit of a techno-optimist, siding with those who hope AI will improve content creation without eliminating its soul. “But could these AI [tools] help you make it cheaper and better, in a way, so you can make more of it and make it more accessible to everyone?”
“I think it’s exciting. I don’t think we know quite where it’s going to go. But what I do think is that it’s going to make making content a little bit easier but at the end of the day, I think they’re still going to have to have that human touch.”
It’s easy to see how artificial intelligence is appealing to someone whose job is to ask: “How can we innovate at scale to bring the most powerful tools to the system?”
AI is already aiding in content proliferation, but don’t expect it to help with the “quality over quantity” problem.
Rubenstein’s focus, naturally, is on the survival of local public media stations. PBS affiliates survived the advent of cable television because, he says, “public media content is different and it is higher quality.”
However, Rubenstein is aware that Web2 is already presenting many challenges: “You’re now competing against the history of film and TV every single night, on top of whatever local live event is coming.” And that’s in addition to content created for “YouTube, TikTok, VRML, OTT, wherever…”
Rubenstein also cautions M&E pros not to forget that “gaming continues to grow, and I think people really underestimate how much time is spent on gaming, especially in that younger demographic. And that’s a real threat because there’s only so much time in the day.”
Nonetheless, Rubenstein says, “We just have to maintain focus on who we are and who we serve, and it is hard when there’s so much change going on. But staying true to who we are, I think, is really important… [My] advice to other media companies: [you’ve] really got to understand who you are and who your audience is and just keep the noise down, and don’t have this fear of missing out. Because when people make knee-jerk decisions and they panic, they make wrong decisions and they end up paying for it later.”
You can learn more about Rubenstein’s efforts at PBS.org.
Posted
February 7, 2023
Amp’d About With Emily Reigart: Sustainability in M&E and Barbara Lange
TL;DR
Barbara Lange founded a consultancy that specializes in helping media technology companies form and enact sustainability plans.
Lange bases her sustainability beliefs in part on the United Nations’ Sustainable Development Goals, adopted in 2015.
Sustainability efforts extend beyond climate advocacy. Gender equity is one such area that Lange is passionate about as a woman in tech.
Barbara Lange is the founder and principal of Kibo121, a boutique consultancy focused on guiding media tech companies in taking practical steps on their path to sustainability. She recently completed 12 years as the executive director of SMPTE. Prior to SMPTE, Lange built a career in scholarly publishing at Springer-Verlag and IEEE.
Watch the interview (above) or read the following summary of our conversation about sustainability in the M&E industry.
By addressing sustainability, you’ll make your business more attractive to your customers, to your investors, and to your workforce.
Barbara Lange
Understanding Sustainability
“I define sustainability as meeting the needs of the present without compromising the ability of future generations to meet their own needs. It’s actually what the UN defined sustainability as in 1987, when it really started becoming part of our lexicon,” Lange explains.
“I also go further and look at sustainability as the intersection between the environmental, social and economic influences because you can’t have one without the other.”
Specifically, Lange notes, “We can’t build a sustainable environment without considering the impact on society and the economic drivers.”
Much of her thinking is based on work generated by the United Nations, which adopted 17 Sustainable Development Goals in 2015 and has committed to achieving them by 2030.
“Look at the UN’s 17 Sustainable Development Goals,” Lange says. “You’ll see that it’s a breadth of issues that include all sorts of aspects of humanity, from equal access to education and gender equality, as well as the typical environmental goals.
“So sustainability …encompasses all these other parts of the social good.”
For Lange and her work at Kibo121, she views “sustainability as much bigger than addressing just the carbon footprint. And while that is an essential part of our sustainable future, we have to look at the bigger picture of how it impacts society and the economics.
“And I personally, as a woman in a technology space, also want to encourage and empower young girls to enter into high tech fields, and that also feeds into those sustainability goals.”
Lange understands that sustainability still needs selling. It’s part of her core mission, and she knows that organizations need to see the business case, not just the moral imperative.
“Climate change and sustainability are the issues that are critical to your business’s future. And it’s here now, and we can’t avoid it,” Lange says in her elevator pitch. “So by addressing sustainability, you’ll make your business more attractive to your customers, to your investors, and to your workforce.”
Your sustainability plan must be “integral in your business, and that means being integral into your strategy and your workforce,” Lange explains. “And it really becomes a new way of life. And I think that’s what’s so exciting about it, because you’re just being mindful about sustainability as an essence around your business activities.”
“So even if you don’t believe in the data [related to climate change], operating your business more sustainably is also going to offer you efficiency gains, which naturally will reflect in cost savings.”
So how does an organization go about creating and implementing a sustainability plan? Lange works on a five-point plan:
Form a Green Team. They will be in charge of understanding and then managing the situation as it is on the ground.
Benchmark sustainability efforts. Take quantitative and qualitative assessments. This will help you to understand your baseline energy consumption and identify where it’s possible to adjust.
Plan, set goals, and prioritize.
Track your progress and analyze results. Make necessary adjustments.
Celebrate your achievements! This should apply to all the various stakeholders: coworkers, suppliers, customers, and others.
Lange says, “Then you do it all over again because, like I said, you’re starting a journey that becomes a new way of life.”
Lange notes, “It’s actually not all that different than what we do on a regular daily basis in terms of business strategy and such. But it is [about being] mindful to include sustainability at every step of the way. So as you develop a new product or service, you think about the financial budget. Now you have to think about the carbon or sustainability budget, as well.”
Initially, Lange explains, you’ll probably tackle your carbon footprint. “You really have to look at your business, including the value chain from your suppliers to your customers, and understanding how your operations work and where you can find those low-hanging fruit.”
“Some of the low-hanging fruit is… about efficiency gains.” That can be as simple as switching off lights and turning off and unplugging unused computers, which “can have a significant impact. And it also then saves you money, which is always a good thing too. So other ideas are around travel and the use of technology. As you dig deeper into such things, you can see how your systems are actually set up.”
Digging deeper, you’ll want to investigate: “Are you using cloud services versus on-prem? Are you using bespoke hardware versus off-the-shelf? Are you using industry standards versus proprietary processes? All of these things can contribute to the understanding of your current situation and then how you can make adjustments going forward.”
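To make the benchmarking step concrete, here is a minimal sketch (not taken from Lange’s or Kibo121’s methodology) of how a Green Team might turn metered electricity use into a baseline carbon estimate. Every activity name, kWh figure, and the grid emission factor below is a hypothetical placeholder; a real assessment would use measured data and a region-specific factor from your utility or national grid operator.

```python
# Minimal sketch of the "benchmark" step: estimating a baseline carbon
# footprint from annual electricity use. All figures are hypothetical
# placeholders for illustration only.

# Annual electricity consumption by activity, in kWh (illustrative)
annual_kwh = {
    "on_prem_servers": 120_000,
    "edit_suites": 45_000,
    "office_lighting_and_hvac": 80_000,
    "always_on_workstations": 30_000,
}

# Assumed average grid emission factor; this varies widely by region
GRID_EMISSION_FACTOR_KG_PER_KWH = 0.4


def baseline_footprint(consumption_kwh: dict, factor: float) -> dict:
    """Convert kWh per activity into kg CO2e per activity."""
    return {activity: kwh * factor for activity, kwh in consumption_kwh.items()}


footprint = baseline_footprint(annual_kwh, GRID_EMISSION_FACTOR_KG_PER_KWH)
total_kg = sum(footprint.values())

# Rank activities by their share of the total to surface the biggest levers
for activity, kg in sorted(footprint.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{activity:28s} {kg:10,.0f} kg CO2e ({kg / total_kg:.0%})")
print(f"{'TOTAL':28s} {total_kg:10,.0f} kg CO2e")
```

Sorting activities by their share of the total is one simple way to surface the “low-hanging fruit” Lange describes before digging into travel, cloud versus on-prem choices, and the rest of the value chain.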
Where We Are Today
“It’s a global problem, but it’s not being addressed at the same pace around the world. I would say that Europe is definitely ahead of the game in this area. And what we see is that that’s because the EU has a lot of regulations and mandates that require organizations to address sustainability and disclose carbon footprints and other measures, so those requirements really do make a difference.
“And you can see the changes. Also, in Europe they have a different situation with the energy crisis as a result of the war in Ukraine, so there are different reasons that these are drivers. What we see here in the US is that we don’t have those requirements, but there are certainly different drivers. And even if we don’t have those specific requirements as they do in the EU, there are things happening in the United States.
“Firstly, President Biden’s Inflation Reduction Act, which passed last fall, is probably the single largest investment in climate and energy in American history, and it’s enabling lower energy costs and incentivizing the adoption of clean energy technology to combat climate change. And that’s happening both at the business and at the private homeowner level. So that’s really inspiring. The other is the Securities and Exchange Commission.
“The SEC, which is responsible for regulating public companies, is about to make a change that will require listed companies, meaning public companies, to file sustainability metrics along with their financial filings. I believe that these two actions will get U.S. business really more tuned in and driving towards a new sustainability pathway. Finally, I would also say that the next generation of workers are going to be big drivers for change as they think about their future, their careers, their workplaces.
“They will be a positive influence, I think, for business to behave in a more corporately responsible way and really help push this towards that goal of net zero by 2050. So there’s not a lot of time to get there.”
NAB’s Excellence in Sustainability Awards program, sponsored by AWS, comprises three honors:
The Sustainability Champion Award will be given to individuals driving sustainability efforts and programs.
The Sustainability Leadership Award will recognize organizations that have launched or completed sustainability initiatives in the past 12 months.
The Sustainability Product or Service Award is for products or services launched in the past 12 months that significantly improve sustainability or provide sustainable market alternatives.
Media & Entertainment has a big environmental impact: think carbon emissions, waste and energy use. The video entertainment industry’s carbon footprint has surpassed even that of the airline industry, prompting technology developers and other companies to step up with innovative approaches and practices.
Posted
February 7, 2023
“Kendrick Lamar Live in Paris” Brings Cinematic Production to a Streamed Event
TL;DR
The video production of the recent Kendrick Lamar concert in Paris employed multiple digital cinema cameras in a livestreamed outdoor broadcast.
The production relied heavily on Sony equipment, including the company’s digital cine flagship Venice camera in both Super 35 and full-frame 6K configurations.
Other equipment included an ARRI Trinity rig spanning the area from the stage to the floor, a spidercam, and a robotic rail-cam system “that acted like a sniper,” able to boom up and boom down precisely while maintaining a beautiful frame above stage height.
Camera technology that started out in the upper echelons of cinema has now become so accessible that digital cine cameras and lenses are being used to photograph sports and music concerts too.
Normally, such cameras are used sparingly, for cinematic depth-of-field cutaways in live sports or in glossily post-produced concert video footage.
The video production of the recent Kendrick Lamar tour took this to another level by using multiple digital cinema cameras in a livestreamed outside broadcast.
Perhaps that isn’t surprising given an artist of Lamar’s caliber. The Big Steppers: Live From Paris, part of Lamar’s “Big Steppers Tour,” was streamed live exclusively on Amazon Music and Prime Video from the Accor Arena in Paris this past October.
Kendrick Lamar’s The Big Steppers Tour LIVE from Paris.
“We didn’t want to just use a prefab camera plot,” Ritchie explains. “We really wanted to understand what would be dynamic, what would be a great storytelling device, what lenses would feel more immersive versus objective.”
The amount of technology used for the shoot was astonishing, as detailed in the Sony case study. An ARRI Trinity went from the stage to the floor for specifically choreographed moments. Two additional Steadicams, one on stage for fluid live moments, and one in the audience, captured moments with fans. They had a robotic rail-cam system “that acted like a sniper,” able to boom up and boom down precisely while maintaining a beautiful frame above stage height.