An introduction to my open-source project Flock (English version): https://github.com/Onelevenvy/flock
📃 Flock
Flock. Stars are welcome: https://github.com/Onelevenvy/flock
🤖️ Overview
An open-source chatbot, RAG, agent, and multi-agent application project built on LangChain, LangGraph, and other frameworks, capable of offline deployment.
Workflow
Agent Chat
Knowledge Retrieval
Human in the loop (human approval, letting the LLM rethink, or asking a human for help)
Flock aims to be an open-source platform for developing large language model (LLM) applications, built on the concepts of LangChain and LangGraph. The goal is a suite of LLMOps solutions that supports chatbots, RAG applications, agents, and multi-agent systems, with the capability for offline operation.
Inspired by the StreetLamb project and its tribe project, Flock adopts much of their approach and code.
Building on this foundation, it introduces new features and directions of its own.
Parts of this project's layout reference Lobe-chat, Dify, and fastgpt.
They are all excellent open-source projects, thanks 🙇.
👨💻 Development
Project tech stack: LangChain + LangGraph + React + Next.js + Chakra UI + PostgreSQL
💡RoadMap
1 APP
- ChatBot
- SimpleRAG
- Hierarchical Agent
- Sequential Agent
- Workflow --- In Progress
- More multi-agent systems
2 Model
- OpenAI
- ZhipuAI
- Siliconflow
- Ollama
- Qwen
- Xinference
3 Others
- Tools Calling
- I18n
- Langchain Templates
🏘️ Highlights
- Persistent conversations: Save and maintain chat histories, allowing you to continue conversations.
- Observability: Monitor and track your agents’ performance and outputs in real-time using LangSmith to ensure they operate efficiently.
- Tool Calling: Enable your agents to utilize external tools and APIs.
- Retrieval Augmented Generation: Enable your agents to reason with your internal knowledge base.
- Human-In-The-Loop: Enable human approval before tool calling.
- Open Source Models: Use open-source LLMs such as Llama, Qwen, and GLM.
- Multi-Tenancy: Manage and support multiple users and teams.
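The human-in-the-loop idea above, pausing before a tool call so a person can approve or reject it, can be sketched framework-free. This is an illustrative sketch only, not Flock's actual API; the function names are hypothetical:

```python
# Minimal human-in-the-loop gate: a tool call runs only after approval.
# `guarded_call` and `approve` are illustrative names, not Flock's API.
from typing import Callable

def guarded_call(tool: Callable[[str], str], arg: str,
                 approve: Callable[[str, str], bool]) -> str:
    """Run tool(arg) only if the approver (a human, in practice) says yes."""
    if approve(tool.__name__, arg):
        return tool(arg)
    # A real system might instead let the LLM rethink or ask for help here.
    return "call rejected by human reviewer"
```

In a real agent loop the `approve` callback would surface the pending tool name and arguments in the UI and block until the user responds.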
How to get started
1. Preparation
1.1 Clone the Code
git clone https://github.com/Onelevenvy/flock.git
1.2 Copy Environment Configuration File
cp .env.example .env
1.3 Generate Secret Keys
Some environment variables in the .env file have the default value changethis.
You must replace them with secret keys. To generate one, run:
python -c "import secrets; print(secrets.token_urlsafe(32))"
Copy the output and use it as the password / secret key, then run the command again to generate a different key for each remaining variable.
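Filling the placeholders can also be scripted. A minimal sketch, assuming each secret uses the literal placeholder `changethis` (review the resulting file by hand):

```python
# Replace each "changethis" placeholder with a fresh random key.
# Assumes the placeholders are the literal string "changethis".
import re
import secrets

def fill_placeholders(text: str) -> str:
    # A new token per match, so no two secrets share a value.
    return re.sub(r"\bchangethis\b",
                  lambda _match: secrets.token_urlsafe(32),
                  text)
```

Applied to the config file, e.g. `open(".env", "w").write(fill_placeholders(open(".env").read()))`.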
1.4 Install PostgreSQL, Qdrant, and Redis
cd docker
docker compose --env-file ../.env up -d
2. Run Backend
2.1 Install the basic environment
Server startup requires Python 3.10.x. It is recommended to use pyenv to install the Python environment quickly.
To install an additional Python version, use pyenv install:
pyenv install 3.10
To switch to the "3.10" Python environment, use the following command:
pyenv global 3.10
Follow these steps:
Navigate to the backend directory:
cd backend
Activate the environment and install dependencies:
poetry env use 3.10
poetry install
2.2 Initialize data
# Let the DB start
python /app/app/backend_pre_start.py
# Run migrations
alembic upgrade head
# Create initial data in DB
python /app/app/initial_data.py
2.3 Run uvicorn
uvicorn app.main:app --reload --log-level debug
2.4 Run Celery (optional; only needed for the RAG function)
poetry run celery -A app.core.celery_app.celery_app worker --loglevel=debug
3. Run Frontend
3.1 Enter the web directory and install the dependencies
cd web
pnpm install
3.2 Start the web service
cd web
pnpm dev
# or pnpm build then pnpm start