November 7, 2025

HubX integrates Gemini 2.5 Flash Image for low-latency, contextual photo editing in the ReShoot app

Sertac Çınar

Sr. Product Manager, HubX

Vishal Dharmadhikari

Product Solutions Engineer

HubX is a global technology hub serving over 300 million users across its portfolio of mobile applications. When developing their latest app, ReShoot, they aimed to democratize professional-level photo editing using generative AI. By leveraging the Gemini API, the team achieved a remarkable development velocity, taking the project from the start of MVP development to a live iOS launch in just two weeks. Shortly after, ReShoot claimed the #1 rank in the US Graphics & Design category on the App Store.

The app’s goal is to allow users to alter the scene or style of a photo without losing the natural look and identity of the original subject. For developers, delivering this level of complex, multimodal reasoning within the stringent low-latency requirements of a mobile experience presents a significant architectural challenge. To address this, HubX utilized the Gemini API to build a sophisticated photo editing pipeline that balances high-fidelity contextual understanding with exceptional inference speed.

High-fidelity editing with Nano Banana

To construct the reasoning engine behind ReShoot, HubX worked with the Google team to integrate Gemini 2.5 Flash Image, also known as Nano Banana.

A primary technical challenge in image-to-image generation is maintaining subject identity while interpreting complex scene requests. Unlike traditional pipelines that often require chaining separate models for text reasoning and image synthesis, Gemini 2.5 Flash Image is natively multimodal. It processes text prompts and image inputs in a single, unified step.

This architecture allows ReShoot to perform conversational editing (image + text-to-image) with high adherence to user prompts while preserving the core identity and context of the uploaded photos. Compared to alternatives tested, HubX found that the Gemini model offered superior visual comprehension and multimodal consistency.
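The conversational editing flow described above maps to a single multimodal `generateContent` call: the user's text instruction and the original photo travel together in one request, and the edited image comes back in the response. The sketch below shows what that request could look like from a Node.js backend. The payload shape and model name follow the public Gemini API documentation, but the prompt, helper names, and error handling are illustrative assumptions, not HubX's production code.

```javascript
// Minimal sketch of an image + text-to-image edit via the Gemini API
// generateContent REST endpoint (assumed model: gemini-2.5-flash-image).

const GEMINI_ENDPOINT =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent";

// Build one multimodal turn: the edit instruction as a text part plus the
// original photo as a base64-encoded inlineData part, sent in a single step.
function buildEditRequest(promptText, imageBase64, mimeType = "image/jpeg") {
  return {
    contents: [
      {
        role: "user",
        parts: [
          { text: promptText },
          { inlineData: { mimeType, data: imageBase64 } },
        ],
      },
    ],
  };
}

// Send the request and return the edited image as base64 (requires a real
// API key; uses Node 18+ global fetch).
async function editPhoto(apiKey, promptText, imageBase64) {
  const res = await fetch(GEMINI_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-goog-api-key": apiKey,
    },
    body: JSON.stringify(buildEditRequest(promptText, imageBase64)),
  });
  const json = await res.json();
  // The edited image is returned as an inlineData part of the first candidate.
  const part = json.candidates[0].content.parts.find((p) => p.inlineData);
  return part.inlineData.data;
}
```

Because reasoning and synthesis happen in one call, there is no intermediate handoff between a text model and an image model, which is what keeps the subject's identity consistent between input and output.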

Reducing app latency by 40%

While high-fidelity generation is a prerequisite, mobile users expect near-instant results. Any friction in the creative process can lead to a loss of engagement.

By standardizing on Gemini 2.5 Flash Image, HubX reduced the average response time for updating and manipulating images by nearly 40%. This critical reduction in latency transforms the user experience from a passive waiting state to a fluid creative process, which is essential for retention in consumer mobile apps.

Streamlining development workflows

Beyond immediate performance gains, integrating the Gemini API significantly simplified the HubX development architecture. The team utilizes Google AI Studio to prototype and test prompt chains before deploying them to production via custom Node.js packages connected to their mobile backend.

Prior to using Gemini models, tasks involving multimodal data interpretation often required complex custom logic or the chaining of disparate models. By adopting Gemini 2.5 Flash Image, HubX consolidated these tasks into a single, coherent modeling framework, reducing architectural complexity while improving inference speed.

What’s next

Following the successful integration of the Gemini API, HubX observed an increase in user engagement, as indicated by higher save and like rates on generated content. Looking ahead, they plan to evolve ReShoot from a single-purpose tool into a comprehensive platform for native, seamless photo editing.

HubX’s implementation demonstrates how developers can leverage the speed and native multimodal capabilities of the Gemini API to build intuitive, high-performance applications that meet the demands of mobile users.

To start building with Gemini models, read our image generation documentation.