After working through sequential graph execution in LangGraph, I wanted to explore conditional and parallel execution flows, so I came up with a contrived example that expands a simple RAG use case.
This repo has examples of RAG with human-in-the-loop, where execution stops to take in user input and then proceeds according to that input. The repo has two flows:
In this flow we have expanded the basic RAG pipeline to account for user input.
The flow has the following steps:
Here is the flow diagram, auto-generated via LangGraph and Mermaid:
Here is the general graph flow in short; the details can be found in workflow.py:
```python
# All the nodes we need
workflow = StateGraph(GraphState)
workflow.add_node(RETRIEVE, retrieve)
workflow.add_node(GENERATE, generate)
workflow.add_node(GRADE_DOC, grade_documents)
workflow.add_node(GRADE_ANS, grade_generation)
workflow.add_node(OUT_SCOPE_ANS, default_ans)
workflow.add_node(HUMAN_FEEDBACK_AFTER_DOC_GRADE, human_feedback_after_doc_grade)
workflow.add_node(MAX_ITERATION_RESPONSE, max_iteration_response)

# Entry point of the graph
workflow.set_entry_point(RETRIEVE)

# All the plain edges we need between nodes
workflow.add_edge(RETRIEVE, GRADE_DOC)
workflow.add_edge(MAX_ITERATION_RESPONSE, OUT_SCOPE_ANS)
workflow.add_edge(GRADE_ANS, END)
workflow.add_edge(OUT_SCOPE_ANS, END)

# All the conditional edges we need between nodes
workflow.add_conditional_edges(
    GRADE_DOC,
    should_ask_for_human_input,
    {
        HUMAN_FEEDBACK_AFTER_DOC_GRADE: HUMAN_FEEDBACK_AFTER_DOC_GRADE,
        GENERATE: GENERATE,
        MAX_ITERATION_RESPONSE: MAX_ITERATION_RESPONSE,
    },
)

# All the nodes the graph can divert to
intermediates = [OUT_SCOPE_ANS, RETRIEVE]
workflow.add_conditional_edges(
    HUMAN_FEEDBACK_AFTER_DOC_GRADE,
    route_to_retrieveagain_or_end,
    intermediates,
)

# This is an either/or flow
workflow.add_conditional_edges(
    GENERATE,
    grade_generation_for_hallucination,
    {
        'not_hallucinated': GRADE_ANS,
        'hallucinated': OUT_SCOPE_ANS,
    },
)

memory = MemorySaver()
# We tell the graph to stop before these steps
graph_sequence = workflow.compile(
    checkpointer=memory,
    interrupt_before=[HUMAN_FEEDBACK_AFTER_DOC_GRADE, MAX_ITERATION_RESPONSE],
)
graph_sequence.get_graph().draw_mermaid_png(output_file_path='./graph-human-in-loop.png')
```
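The interrupt/resume idea behind `interrupt_before` can be illustrated without LangGraph at all. Here is a minimal, stdlib-only sketch (all names are hypothetical stand-ins, not the LangGraph API): execution pauses before a designated node, the caller injects the user's input, and the run resumes from the checkpoint:

```python
# Hypothetical node functions over a shared state dict.
def retrieve(state):
    state["docs"] = ["doc-1", "doc-2"]
    return state

def grade_documents(state):
    state["relevant"] = False  # pretend the docs were graded as not relevant
    return state

def human_feedback(state):
    # By the time this runs, the driver has injected the user's answer.
    state["route"] = "retrieve_again" if state["user_input"] == "yes" else "end"
    return state

def run(nodes, state, interrupt_before, start=0):
    """Run nodes in order; stop *before* an interrupting node and
    return (paused_at_index, state) so the caller can resume later."""
    for i in range(start, len(nodes)):
        name, fn = nodes[i]
        if name in interrupt_before and "user_input" not in state:
            return i, state  # checkpoint: paused, waiting for human input
        state = fn(state)
    return None, state       # ran to completion

nodes = [
    ("retrieve", retrieve),
    ("grade_documents", grade_documents),
    ("human_feedback", human_feedback),
]

# First pass: stops just before the human-feedback node.
paused_at, state = run(nodes, {"question": "q"}, {"human_feedback"})
# The caller supplies the human input, then resumes from the checkpoint.
state["user_input"] = "yes"
paused_at, state = run(nodes, state, {"human_feedback"}, start=paused_at)
```

LangGraph's checkpointer plays the role of the returned `(paused_at, state)` pair here, persisting the paused state so the graph can be resumed in a later call.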
In this flow we have removed the human factor to keep things simple.
The flow runs the answer grader and the hallucination check in parallel.
The flow has the following steps:
Here is the flow diagram, auto-generated via LangGraph and Mermaid:
Here is the general graph flow in short; the details can be found in workflow_parallel.py:
```python
workflow = StateGraph(GraphState)

# All nodes
workflow.add_node(RETRIEVE, retrieve)
workflow.add_node(GRADE_DOC, grade_documents)
workflow.add_node(GENERATE, generate)
workflow.add_node(GRADE_ANS, grade_generation)
workflow.add_node(GRADE_ANS_HALLUCINATION, grade_ans_for_hallucination)
workflow.add_node(CHECK_ANS, check_ans)

# Entry point
workflow.set_entry_point(RETRIEVE)

# Connections between the nodes
workflow.add_edge(RETRIEVE, GRADE_DOC)
workflow.add_edge(GRADE_DOC, GENERATE)
workflow.add_edge(CHECK_ANS, END)

# This is the parallel flow: one node fans out into 2 nodes...
workflow.add_edge(GENERATE, GRADE_ANS)
workflow.add_edge(GENERATE, GRADE_ANS_HALLUCINATION)
# ...and the 2 nodes fan back in to a single node
workflow.add_edge(GRADE_ANS, CHECK_ANS)
workflow.add_edge(GRADE_ANS_HALLUCINATION, CHECK_ANS)

memory = MemorySaver()
graph_parallel = workflow.compile(checkpointer=memory)
graph_parallel.get_graph().draw_mermaid_png(output_file_path='./graph-parallel.png')
```
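The fan-out/fan-in step is the interesting part: after GENERATE, both graders receive the same state and run independently, and CHECK_ANS only fires once both verdicts are in. Outside LangGraph, the same shape can be sketched with the standard library (the grader logic below is a hypothetical placeholder, not the repo's actual grading code):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the two grader nodes: each takes the
# generated answer and returns its own verdict independently.
def grade_generation(answer):
    return {"answer_grade": "useful" if "RAG" in answer else "not useful"}

def grade_ans_for_hallucination(answer):
    return {"hallucination": "grounded" if len(answer) < 100 else "unsure"}

def check_ans(results):
    # Fan-in: merge the two grader verdicts into one final decision.
    merged = {}
    for r in results:
        merged.update(r)
    merged["accepted"] = (merged["answer_grade"] == "useful"
                          and merged["hallucination"] == "grounded")
    return merged

answer = "RAG combines retrieval with generation."

# Fan-out: submit both graders at once so they run in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(grade_generation, answer),
               pool.submit(grade_ans_for_hallucination, answer)]
    verdict = check_ans([f.result() for f in futures])
```

In LangGraph, the two `add_edge(GENERATE, ...)` calls express the fan-out and the two edges into CHECK_ANS express the fan-in; the framework handles the scheduling and state merging that the executor and `check_ans` do by hand here.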
The main.py file is the entry point: it imports the needed workflow and executes it. The dependencies are listed in the requirements.txt file, so we can install them with the following command: pip install -r requirements.txt