This project supports the use case of grounding LLMs with a Knowledge Graph (KG). The aim of this use case is to work towards a future goal in Graphene, namely: 'We provide Grounding capabilities in Graphene'. Users, with their own LLM or an LLM of their choice from the LangChain component modules, can ground the LLM's capabilities with the generated Knowledge Graph. Please refer to the following issue for more details on the use case: https://gitlab.eclipse.org/eclipse/graphene/tutorials/-/issues/25
# Docker Containers
The use case is built from three Docker containers:
1. GLLM_Databroker
The databroker is responsible for acquiring data (the user's query and a use-case-specific document) from the user and passing it to the next container, the Parser Model (see the first sketch after this list).
2. Parser Model
The Parser Model is an OpenAI LLM that converts the unstructured, use-case-specific data into structured data (i.e. triples of entities and relations) for constructing the Knowledge Graph. The structured data is then sent to the final container, GroundingLLM (see the second sketch after this list).
3. GroundingLLM
The GroundingLLM container uses Neo4j as its base image and bundles a LangChain component and an OpenAI LLM. The structured data is used to construct the Knowledge Graph in the Neo4j database, and the LangChain component accesses the graph to extract the information relevant to the user's query.
Finally, the relevant information from the graph and the user's query are sent to the OpenAI LLM to generate a response that is expected to be accurate, relevant, and of high quality (see the third sketch after this list).
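The sketches below illustrate, in Python, roughly what each container does. First, a minimal sketch of the databroker's hand-off: the `GroundingRequest` dataclass and `acquire_data` function are hypothetical names used only to show the data contract (a user query plus a document); the actual container-to-container transport in the pipeline is omitted.

```python
from dataclasses import dataclass

# Hypothetical container for the two inputs the databroker collects.
@dataclass
class GroundingRequest:
    user_query: str        # the question the user wants answered
    document_text: str     # the use-case-specific document to ground on

def acquire_data(query: str, document_path: str) -> GroundingRequest:
    """Read the user's document from disk and bundle it with the query.

    The transport to the Parser Model container (handled by the pipeline's
    orchestration) is intentionally left out of this sketch.
    """
    with open(document_path, encoding="utf-8") as f:
        document_text = f.read()
    return GroundingRequest(user_query=query, document_text=document_text)
```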
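Next, a sketch of the Parser Model's triple extraction, assuming the OpenAI Python SDK (v1+) and `OPENAI_API_KEY` set in the environment; the prompt, model name, and `extract_triples` helper are illustrative assumptions rather than the exact prompt the container uses.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK v1+ and OPENAI_API_KEY set

client = OpenAI()

def extract_triples(document_text: str, model: str = "gpt-4o-mini") -> list[tuple[str, str, str]]:
    """Ask the LLM to turn unstructured text into (entity, relation, entity) triples."""
    prompt = (
        "Extract knowledge-graph triples from the text below. "
        "Return only a JSON array of [subject, relation, object] triples.\n\n"
        f"{document_text}"
    )
    response = client.chat.completions.create(
        model=model,  # illustrative model name; the actual deployment may differ
        messages=[{"role": "user", "content": prompt}],
    )
    raw = response.choices[0].message.content
    # The LLM is asked for JSON; real code would guard against malformed output.
    return [tuple(t) for t in json.loads(raw)]
```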
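Finally, a sketch of the GroundingLLM step, assuming recent `langchain`, `langchain-community`, and `langchain-openai` packages and a reachable Neo4j instance; the connection details, model name, and graph schema (generic `Entity` nodes joined by `RELATION` edges) are illustrative assumptions.

```python
# Minimal sketch: load triples into Neo4j and answer the query via LangChain.
# Assumes recent langchain / langchain-community / langchain-openai packages,
# a running Neo4j instance, and OPENAI_API_KEY in the environment.
from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
from langchain_openai import ChatOpenAI

# Placeholder connection details; the real container points these at its bundled Neo4j.
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

def build_kg(triples: list[tuple[str, str, str]]) -> None:
    """Store each (subject, relation, object) triple as two nodes and an edge."""
    for subject, relation, obj in triples:
        graph.query(
            "MERGE (s:Entity {name: $s}) "
            "MERGE (o:Entity {name: $o}) "
            "MERGE (s)-[:RELATION {type: $r}]->(o)",
            params={"s": subject, "r": relation, "o": obj},
        )

def answer(user_query: str) -> str:
    """Let LangChain translate the query to Cypher, fetch graph context, and answer."""
    chain = GraphCypherQAChain.from_llm(
        llm=ChatOpenAI(model="gpt-4o-mini"),  # illustrative model choice
        graph=graph,
        allow_dangerous_requests=True,  # required by newer LangChain versions for Cypher QA
    )
    return chain.invoke({"query": user_query})["result"]
```

Storing the relation name as an edge property keeps the Cypher simple, since relationship types cannot be parameterized; a production setup might instead map each relation to a concrete relationship type.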