How Google's 'most popular' photo generation model got Nano Banana name and what it is officially called

Google's latest image generation model, Gemini Nano Banana, became a viral sensation, yet few knew how it got its popular name. Now, David Sharon, a group product manager for the Gemini App, has revealed the origin story on the Made By Google Podcast. He explained that an employee named Nina initially provided "Nano Banana" as a simple placeholder name, and it unexpectedly stuck, turning into one of AI's most recognisable names.

“Nano Banana was created by a PM named Nina. And while she was submitting, when you submit a model anonymously to LM Arena, you need to give it a placeholder name,” Sharon said.

He said that the company wanted to mask the fact that the model was from Google and test it like any other image generation model.

“And I would love to tell you that a lot of thought and rigor went into the name Nano Banana, but the truth is that at 2:30 in the morning, Nina had a moment of brilliance to call the placeholder Nano Banana. And that was the name in LM Arena,” Sharon explained.

He noted that the company “didn't expect it to go viral”, adding that people were already calling the AI model Nano Banana based on this placeholder name on LM Arena – an open platform for evaluating AI through human preference.

He said that when the model was launched, people kept calling it "Nano Banana".

“So we've adopted that name also and hugged it because it really is a great name,” he added.

How Nano Banana is different from other AI photo models
According to Sharon, Gemini Nano Banana stands apart from its predecessors through breakthroughs in character consistency and in handling highly imaginative, complex visual concepts. He said the difference became immediately clear during early internal tests, particularly when generating images of individuals.

He said that the key differentiator for Nano Banana is its ability to accurately retain the identity of subjects in generated images.

“The first time I tried Nano Banana, I uploaded an image of myself and asked to put myself in space. And all of a sudden, for the first time, I saw myself in the image and not my AI distant cousin,” Sharon said.

The development team knew this focus on consistency would be crucial, as prior launches demonstrated that people "really want to see themselves, their loved ones, their pets in the image and imagine them in new ways and transport them to new scenarios."

According to Sharon, the model also exhibited new capabilities in handling abstract and conceptual requests.

The internal "Greenfield team," which the executive called the "model whisperers" and "expert prompters," tested the limits of the new model by giving it creative, multimodal challenges. The results revealed its power to blend disparate concepts:

Conceptual Blending: For example, the model could take an image of a couch and an image of a potato and correctly generate a composite image of a couch made out of a potato, effectively creating a visual "couch potato."

The ability to successfully execute these high-level, imaginative tasks demonstrated that Nano Banana offered capabilities well beyond simple image generation.