# thinkai

**Repository Path**: mirrors_Azure/thinkai

## Basic Information

- **Project Name**: thinkai
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-10-29
- **Last Updated**: 2026-04-11

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# ThinkAi

[Website](https://thinkai.live) · [License](https://github.com/OptimalScale/LMFlow/blob/main/LICENSE) · [Python 3.9](https://www.python.org/downloads/release/python-390/) · [LangChain](https://www.langchain.com) · [OpenAI](https://platform.openai.com) · [bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) · [Next.js](https://nextjs.org) · [React](https://react.dev) · [TypeScript](https://www.typescriptlang.org) · [Tailwind CSS](https://tailwindcss.com)

ThinkAi is an LLM app with [Retrieval Augmented Generation](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) (RAG) that talks philosophy. It is built using InstructGPT embeddings, Chroma's vector search, LangChain tokenizers for text chunking, Meta's `bart-large-cnn` model for summarization, and OpenAI's `gpt-3.5-turbo` model for structuring the final response. This is wrapped in a Next.js web app hosted entirely on AWS (AWS Amplify, AWS Elastic Beanstalk, and AWS EC2); the frontend code is here: [ThinkAi UI](https://github.com/maanvithag/think-ai-ui).

LLMs are trained on internet-scale data, which makes it hard to know whether a generated response comes from a reliable source or is a product of hallucination. RAG adds external knowledge and forces the model to generate its response from that retrieved context. This improves factual consistency, makes generated responses more reliable, and helps mitigate hallucination, which is exactly what ThinkAi aims to achieve. A performance evaluation of ThinkAi against ChatGPT is here: [ThinkAi v. ChatGPT](docs/thinkai_v_chatgpt.md)

# System Architecture:
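The retrieve-then-generate flow described above can be sketched in a few lines. This is only an illustration of the pattern, not ThinkAi's actual implementation: a toy bag-of-words cosine similarity stands in for the InstructGPT embeddings and Chroma vector search, and the prompt-building step stands in for the summarization and `gpt-3.5-turbo` calls. All function names here (`embed`, `retrieve`, `build_prompt`) are hypothetical.

```python
# Minimal sketch of the RAG retrieve-then-generate flow.
# Assumption: a bag-of-words cosine similarity replaces the real
# embedding model and vector store used by the actual app.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Constrain the model to answer only from the retrieved context."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


chunks = [
    "Stoicism teaches that virtue is the only true good.",
    "Utilitarianism judges actions by their consequences.",
]
question = "What does Stoicism teach?"
prompt = build_prompt(question, retrieve(question, chunks))
```

In the real pipeline the resulting prompt would be sent to the LLM; grounding the answer in retrieved chunks is what lets the response be traced back to a known source instead of the model's parametric memory.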