diff --git a/Memory_Graph/README.md b/Memory_Graph/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8e501add31e10aa5c501826c4b9534ade8e4720
--- /dev/null
+++ b/Memory_Graph/README.md
@@ -0,0 +1,43 @@
+# MemoryGraph: Long-Term Memory
+
+As powerful as LLMs are, they still struggle to remember past interactions. A long-term memory (LTM) solves this by storing, retrieving, and managing knowledge across interactions, enabling continuity, learning, and smarter decision making.
+
+
+## MemoryGraph pipeline
+
+The pipeline demonstrates how a standalone memory component can function: it can either store data or retrieve data. The pipeline contains three nodes:
+
+- **Chat-interface**: Receives the user's query and passes it to the neo4j_db node.
+- **Neo4j**: The neo4j_db node contains the Neo4j database running the knowledge graph, together with the AI agents and LLMs.
+- **Shared-Folder**: Stores the intermediate files produced during pipeline execution.
+
+This use case integrates LangGraph's StateGraph (a DAG-based orchestration layer from the LangChain ecosystem), agentic AI, prompt engineering, and a Neo4j-based knowledge graph for persistent, structured memory.
+
+ ![MemoryGraph pipeline](media/memory-pipeline.png)
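+
+A minimal sketch of how such a StateGraph could be wired, assuming hypothetical `store_memory` and `retrieve_memory` node functions (the repository's actual agents are more elaborate):
+
+```python
+# Minimal routing sketch built with LangGraph's StateGraph.
+# store_memory / retrieve_memory are stand-ins for the real agent nodes.
+from typing import TypedDict
+from langgraph.graph import END, StateGraph
+
+class MemoryState(TypedDict):
+    query: str
+    answer: str
+
+def route(state: MemoryState) -> str:
+    # Inputs marked with "store:" go to the memory agent;
+    # everything else is treated as a retrieval request.
+    return "store" if state["query"].lower().startswith("store:") else "retrieve"
+
+def store_memory(state: MemoryState) -> MemoryState:
+    # Placeholder: check factuality, extract entities/relations, write to Neo4j.
+    return {**state, "answer": "Stored."}
+
+def retrieve_memory(state: MemoryState) -> MemoryState:
+    # Placeholder: query the knowledge graph (primary, then fallback method).
+    return {**state, "answer": "Retrieved answer."}
+
+graph = StateGraph(MemoryState)
+graph.add_node("store", store_memory)
+graph.add_node("retrieve", retrieve_memory)
+graph.set_conditional_entry_point(route, {"store": "store", "retrieve": "retrieve"})
+graph.add_edge("store", END)
+graph.add_edge("retrieve", END)
+app = graph.compile()  # e.g. app.invoke({"query": "store: Alice works at Acme", "answer": ""})
+```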
+
+## How to use it?
+
+1. Use the chat-interface to ask your query. It can be a question, a statement, or an instruction preceded by the keyword "store:".
+2. The query is passed to the neo4j node when *Ask* is clicked. Use the *View process* button to follow the progress.
+3. Results are displayed on the chat interface once the answer is retrieved.
+
+## 🚀 Working principle:
+
+We use StateGraphs and agentic AI together to combine structured decision making with autonomous, context-aware actions, enabling flexible and intelligent workflows.
+
+  - When a user submits a query, it is first classified as either a **store** or a **retrieval** request.
+  - For a **store-type input**, the memory agent checks factuality and performs information extraction (IE) to store the data in **Neo4j** (see the sketch after the flowchart).
+  - For a **non-store input**, the retrieval agent searches the knowledge graph for answers, first with a schema-guided primary method and then with a fallback method.
+  - If retrieval fails, the system automatically invokes the memory agent to store the input as new information.
+  - During both retrieval and storage, the agents are instructed to flag any contradictions they encounter.
+  
+ ![LTM flowchart](media/LTM_Flowchart_MG.drawio.png)
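+
+As an illustration of the storage step, a minimal sketch using the official `neo4j` Python driver; the entity labels, relationship shape, and credentials below are assumptions, not the pipeline's actual schema:
+
+```python
+# Illustrative only: persisting extracted (subject, relation, object)
+# triples to Neo4j. Schema and credentials are assumptions.
+from neo4j import GraphDatabase
+
+driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
+
+def store_fact(subject: str, relation: str, obj: str) -> None:
+    with driver.session() as session:
+        # MERGE keeps storage idempotent: re-storing a known fact is a no-op.
+        session.run(
+            "MERGE (s:Entity {name: $subject}) "
+            "MERGE (o:Entity {name: $object}) "
+            "MERGE (s)-[:RELATES {type: $relation}]->(o)",
+            subject=subject, relation=relation, object=obj,
+        )
+
+store_fact("Alice", "WORKS_AT", "Acme Corp")
+```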
+
+## Future Roadmap
+- Persistent memory node in AI-Builder.
+- Combine with other LLM pipelines in AI-Builder, such as Travel advisory, RAG, etc.
+
+
+## Contact
+For feedback or collaboration, reach out via Issues or Discussions.
+
diff --git a/Memory_Graph/media/LTM_Flowchart_MG.drawio.png b/Memory_Graph/media/LTM_Flowchart_MG.drawio.png
new file mode 100644
index 0000000000000000000000000000000000000000..31f81e35e92d0ad97001209caa562a9bc2cc4269
Binary files /dev/null and b/Memory_Graph/media/LTM_Flowchart_MG.drawio.png differ
diff --git a/Memory_Graph/media/LTM_architecture_MG.drawio.png b/Memory_Graph/media/LTM_architecture_MG.drawio.png
new file mode 100644
index 0000000000000000000000000000000000000000..b4c15e9658d21ba49c2c24fd8c58807b216d3f62
Binary files /dev/null and b/Memory_Graph/media/LTM_architecture_MG.drawio.png differ
diff --git a/Memory_Graph/media/memory-pipeline.png b/Memory_Graph/media/memory-pipeline.png
new file mode 100644
index 0000000000000000000000000000000000000000..1fc5168010f239c973758c4a87102357e8e54e62
Binary files /dev/null and b/Memory_Graph/media/memory-pipeline.png differ
diff --git a/RAG-pipelines/RAG-Node/README.md b/RAG-pipelines/RAG-Node/README.md
index 07bd7ed8e8da4cea1d0a20be6c80d732388efcf7..209fac17133d74d1b08b59f85860e1e913fe2558 100644
--- a/RAG-pipelines/RAG-Node/README.md
+++ b/RAG-pipelines/RAG-Node/README.md
@@ -1,16 +1,60 @@
-### Current pipeline architecture:
+# SmartRAG - Retrieval-Augmented Generation
 
-The refactored RAG pipeline (single-node) is designed to handle the connection to the unified LLM interface.
+RAG (Retrieval-Augmented Generation) is a model architecture that enhances text generation by combining a retrieval component with a generative model. In RAG, relevant information is first retrieved from a large external knowledge base, such as documents or a database, and then used by the generative model to produce more accurate and informed responses.
 
-![alt text](image.png)
+## SmartRAG pipeline
 
-Key features
+The pipeline demonstrates how RAG works with the chosen LLMs in AI-Builder. It consists of four nodes:
 
-- Bi-directional streaming is enabled
-- Loading and ingestion - dual possibility (PDF Upload and FAISS embeddings)
-- Save workspace for future use
+- **Rag-node:** Upload the use-case-specific document and enter the query for the LLM. The results are displayed in the Response section along with the calculated metrics, and the node collects the user's feedback.
+- **LLM node:** Contains its own chat interface for interacting with the LLMs without the RAG process.
+- **User-feedback-diagnostics node:** Displays the overall efficiency of the pipeline.
+- **Shared folder node:** Stores the data needed by the pipeline.
 
+ **Note:** The pipeline must be kept in a running state so that data can be streamed.
 
-This pipeline is currently in developmental stages and will undergo frequent changes. Contributions, suggestions, and feedback are welcome to help improve this project.
+ ![SmartRAG pipeline](media/Teuken-Commercial.PNG)
+
+## Features
+
+1. Use the **rag-node** to upload a use-case-specific document and ask your query.
+2. The query is passed to the LLM node when *Send* is clicked.
+3. Results are displayed on the chat interface once the answer is retrieved.
+4. Metrics are calculated and displayed along with a feedback option. Complete the rating section to submit the metrics for evaluation of the current iteration.
+5. Use the **User-feedback-diagnostics node** to see the overall efficiency of the pipeline.
+6. Use **Capture execution data** to get the execution metadata and, if needed, upload it to the pipeline's document.
+7. Use the **Prompts** option to set/edit your own prompt for each run or for the whole session.
+8. **Conversation Memory** allows the LLM to remember its previous conversations and provide a personalised chat interface (see the sketch below).
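+
+As a rough illustration of how conversation memory can work (the node's actual mechanism is not shown here), LangChain's `ConversationBufferMemory` replays prior turns into each new prompt:
+
+```python
+# Hedged sketch: buffer-style conversation memory with LangChain.
+# The RAG node's real memory implementation may differ.
+from langchain.memory import ConversationBufferMemory
+
+memory = ConversationBufferMemory(return_messages=True)
+memory.save_context({"input": "My name is Alice."},
+                    {"output": "Nice to meet you, Alice!"})
+
+# Prior turns are loaded and prepended to the next LLM call,
+# so the model can refer back to earlier conversation state.
+print(memory.load_memory_variables({}))
+```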
+
+## 🚀 Working principle:
+
+  - When a user uploads a document, a FAISS index is created and stored in the shared folder.
+  - When a user submits a query, the LLM node retrieves information from the document via the FAISS index and answers with the help of the retrieved context.
+  - The LLMs can be switched in AI-Builder thanks to the unified Protobuf definition in place.
+
+The output is expected to be reliable, accurate, and less prone to hallucination.
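+
+A hedged sketch of that index lifecycle using LangChain's FAISS integration; the embedding model, loader, and paths below are placeholder assumptions, not necessarily the node's actual configuration:
+
+```python
+# Illustrative sketch of the FAISS index lifecycle described above.
+# Model names and paths are assumptions, not the pipeline's real config.
+from langchain_community.document_loaders import PyPDFLoader
+from langchain_community.vectorstores import FAISS
+from langchain_huggingface import HuggingFaceEmbeddings
+from langchain_text_splitters import RecursiveCharacterTextSplitter
+
+embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
+
+# On upload: split the document and persist the index to the shared folder.
+docs = PyPDFLoader("uploaded.pdf").load()
+chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
+FAISS.from_documents(chunks, embeddings).save_local("shared_folder/faiss_index")
+
+# On query: reload the index and fetch context for the LLM prompt.
+index = FAISS.load_local("shared_folder/faiss_index", embeddings,
+                         allow_dangerous_deserialization=True)
+context = index.similarity_search("What does the document say about X?", k=4)
+```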
+
+## Metrics:
+
+1. Faithfulness
+Faithfulness is a RAG metric that evaluates whether the LLM/generator in your RAG pipeline produces outputs that factually align with the information presented in the retrieval context.
+
+2. Answer Relevancy
+Answer relevancy is a RAG metric that assesses whether your RAG generator outputs concise answers. It can be calculated as the proportion of sentences in an LLM output that are relevant to the input (i.e., the number of relevant sentences divided by the total number of sentences). Related metrics include contextual relevancy, contextual precision, and contextual recall.
+
+Libraries: DeepEval, Ragas (a DeepEval sketch follows).
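+
+For example, both automated metrics can be computed per iteration with DeepEval roughly as follows (a sketch with placeholder strings; the pipeline's exact wiring may differ, and DeepEval needs an evaluation LLM configured):
+
+```python
+# Hedged sketch: scoring one RAG iteration with DeepEval.
+# The inputs are placeholders; DeepEval uses an LLM judge under the hood.
+from deepeval.metrics import AnswerRelevancyMetric, FaithfulnessMetric
+from deepeval.test_case import LLMTestCase
+
+test_case = LLMTestCase(
+    input="What is the refund window?",
+    actual_output="Refunds are accepted within 30 days of purchase.",
+    retrieval_context=["Our policy allows refunds within 30 days of purchase."],
+)
+
+for metric in (FaithfulnessMetric(), AnswerRelevancyMetric()):
+    metric.measure(test_case)          # scores the test case with the judge LLM
+    print(type(metric).__name__, metric.score, metric.reason)
+```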
+
+3. Star Rating
+The user provides a rating based on their own assessment of each iteration of the pipeline.
+
+4. Sentiment Score
+The user provides textual feedback, from which a sentiment score is calculated for each iteration.
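+
+One lightweight way to derive such a score (the node's actual sentiment method is not specified here) is NLTK's VADER analyzer:
+
+```python
+# Hedged sketch: scoring free-text feedback with NLTK's VADER analyzer.
+import nltk
+from nltk.sentiment import SentimentIntensityAnalyzer
+
+nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
+sia = SentimentIntensityAnalyzer()
+# Returns neg/neu/pos proportions plus a compound score in [-1, 1].
+print(sia.polarity_scores("The answer was accurate and well sourced!"))
+```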
+ 
+
+## Future Roadmap
+- Adding more document features and supporting RAG across multiple documents.
+- Parameter tuning, e.g. temperature, top-k, and seed.
+
+## Contact
+For feedback or collaboration, reach out via Issues or Discussions.
 
-Please refer to the following ticket to better understand the pipeline structure, eclipse/graphene/tutorials#45.
diff --git a/RAG-pipelines/RAG-Node/image.png b/RAG-pipelines/RAG-Node/image.png
deleted file mode 100644
index 4601824d56056dd10bd4364a5edc394182206354..0000000000000000000000000000000000000000
Binary files a/RAG-pipelines/RAG-Node/image.png and /dev/null differ
diff --git a/RAG-pipelines/RAG-Node/media/Teuken-Commercial.PNG b/RAG-pipelines/RAG-Node/media/Teuken-Commercial.PNG
new file mode 100644
index 0000000000000000000000000000000000000000..247ea2f34109a6a6b1e63af7f3e50172329e84e9
Binary files /dev/null and b/RAG-pipelines/RAG-Node/media/Teuken-Commercial.PNG differ