Xplain AI

Learning Made Easy

What it does

Xplain AI enables real-time conversations with documents, media files, and websites. When a user uploads a document, an extraction pipeline converts its text into vector representations using the embedding-001 model. These vectors are stored in Firestore alongside the text chunks and file URLs, using batched writes to handle large documents efficiently. After vectorization, the user is directed to a chat page to interact with the document. Each user message is embedded and matched against the stored vectors via a cosine similarity search in Firestore; the most similar text is retrieved and passed to Gemini to generate a response.
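The retrieval step described above can be sketched as a cosine similarity search over stored chunk embeddings. This is a minimal, self-contained illustration: the function names and the in-memory `stored` list are assumptions for the sketch, whereas the real pipeline searches embedding-001 vectors persisted in Firestore.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_k_chunks(query_vec, stored, k=3):
    """Return the k text chunks whose embeddings best match the query.

    `stored` is a list of (chunk_text, embedding) pairs, standing in for
    the vectors a real deployment would read from Firestore.
    """
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in stored]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:k]]
```

For example, with `stored = [("intro", [1, 0]), ("methods", [0, 1]), ("mixed", [1, 1])]`, the query vector `[1, 0]` ranks `"intro"` first, then `"mixed"`. The retrieved chunks would then be supplied to the generation model as context.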

For media files, a preprocessing pipeline analyzes them and converts them to text, which is then embedded. This pipeline uses the file manager API and Gemini for in-depth analysis. On the chat page, a "study session" button transforms the chat history into Q&A pairs for enhanced learning, and an online editor lets users edit and download their chat messages, which is especially useful for researchers.

Gemini integration is central to Xplain AI: it powers embedding, similarity search, media file analysis, and Q&A generation. This lets users retrieve the most relevant information from their documents in real time, enhancing their research and learning experience.

Built with

  • Firebase
  • Gemini

Team

By

Team Salone

From

Sierra Leone