- Harvard Business Review and Boston Consulting Group recently teamed up to assess how readily available artificial intelligence tools affected productivity and work quality. The results have been released and broken down in posts on Substack and at HBR.
- One of the key findings? Effective AI users can be categorized as either centaurs or cyborgs. Centaurs delegate certain tasks to AI, while cyborgs prefer to work in tandem with their AI of choice.
- The study also revealed that it’s important to use ChatGPT and its ilk for tasks that are in their current sweet spot — but humans aren’t always sure just what that is, complicating their benefits. Researchers dubbed this boundary “the jagged frontier of AI.”
Generative AI may have moved into the realm of fact, but that doesn’t mean we shouldn’t take some cues from science fiction and fantasy.
Apparently, the most effective AI users fall into two camps: cyborgs and centaurs, both navigating “the jagged frontier of AI,” according to a new study from Harvard Business Review and Boston Consulting Group. What exactly does that mean (asking for the non-nerds among us)?
First, what is the “jagged frontier of AI”? Study co-author Ethan Mollick explains it like this: “Imagine a fortress wall, with some towers and battlements jutting out into the countryside, while others fold back towards the center of the castle. That wall is the capability of AI, and the further from the center, the harder the task.
“Everything inside the wall can be done by the AI, everything outside is hard for the AI to do. The problem is that the wall is invisible, so some tasks that might logically seem to be the same distance away from the center, and therefore equally difficult – say, writing a sonnet and an exactly 50 word poem – are actually on different sides of the wall.”
Now for the involvement of the centaurs and cyborgs.
“[T]his does not involve any actual grafting of electronic gizmos to your body or getting cursed to turn into the half-human/half-horse of Greek myth,” HBR’s Ethan Mollick writes on the One Useful Thing Substack. “They are rather two approaches to navigating the jagged frontier of AI that integrates the work of person and machine.”
“Centaur work has a clear line between person and machine,” Mollick explains. Specifically, “Centaurs have a strategic division of labor, switching between AI and human tasks, allocating responsibilities based on the strengths and capabilities of each entity.”
Notably, the BCG study observed that “centaurs would do the work they were strongest at themselves, and then hand off tasks inside the jagged frontier to the AI,” Mollick says.
On the other hand, Mollick writes, “Cyborgs blend machine and person, integrating the two deeply. Cyborgs don’t just delegate tasks; they intertwine their efforts with AI, moving back and forth over the jagged frontier. Bits of tasks get handed to the AI, such as initiating a sentence for the AI to complete, so that Cyborgs find themselves working in tandem with the AI.”
Mollick says the Cyborg methodology works especially well for writing, for example, and Jim Louderback also writes that this is how he “interacts with Midjourney,” calling it “a conversation with AI to deliver results.”
But here’s the most important part to internalize: “[R]egardless of the philosophic and technical debates over the nature and future of AI, it is already a powerful disrupter to how we actually work. And this is not a hyped new technology that will change the world in five years, or that requires a lot of investment and the resources of huge companies — it is here, NOW.”
Now, its revolutionary nature doesn’t come without pitfalls. Mollick cautions that some AI users can become over-reliant, “falling asleep at the wheel,” so to speak, and miss clues when AI commits an error. Then there’s the homogeneity issue: the study observed that AI-assisted output tended to be more uniform, even as overall quality rose when humans used AI for the right tasks.
Still, Mollick writes, “In my mind, the question is no longer about whether AI is going to reshape work, but what we want that to mean.”
SPECIFICS FROM THE STUDY
If you’re curious about the methodology and findings of the study, look no further than the working paper (co-authored by Fabrizio Dell’Acqua, Edward McFowland III, Ethan Mollick, Hila Lifshitz-Assaf, Katherine C. Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R. Lakhani).
According to the abstract, “For each one of a set of 18 realistic consulting tasks within the frontier of AI capabilities, consultants using AI were significantly more productive (they completed 12.2% more tasks on average, and completed tasks 25.1% more quickly), and produced significantly higher quality results (more than 40% higher quality compared to a control group). Consultants across the skills distribution benefited significantly from having AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores.”
So what does that mean? Essentially, all users benefited from AI supplementation, but those who were under-performing at the start saw the greatest uptick in productivity and quality.
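To see why that narrows the gap between performers, here is a minimal back-of-the-envelope sketch using the percentages from the abstract. The baseline scores are hypothetical placeholders, not figures from the study; only the +43% and +17% uplifts come from the paper.

```python
# Hypothetical pre-AI quality scores (illustrative only, not from the study)
below_avg_baseline = 4.0
above_avg_baseline = 6.0

# Uplifts reported in the abstract: below-average consultants improved 43%,
# above-average consultants improved 17%, relative to their own scores.
below_with_ai = below_avg_baseline * 1.43
above_with_ai = above_avg_baseline * 1.17

gap_before = above_avg_baseline - below_avg_baseline
gap_after = above_with_ai - below_with_ai
print(f"gap before AI: {gap_before:.2f}, gap after AI: {gap_after:.2f}")
```

Whatever the actual baselines, a larger relative boost for weaker performers compresses the spread between the two groups — the same skill-leveling effect the study describes.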
However, this was not universally true for all jobs: “For a task selected to be outside the frontier, however, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI.” In layman’s terms, if a consultant handed the AI a problem outside its sweet spot, the AI was liable to make a mistake — and the consultant was unlikely to catch the issue.